LCA 2019 Day 3

Today’s opening contains a plea for compact seating in the form of a shell one-liner: while true; do tar zcf /dev/{c1,c2,c3,a1,a2,a3} lecture_theatres.gzip ; done - so that’s nice.

As always there is a reminder that the ozone layer is thinner down here than people expect. You can buy accessories for your Raspberry Pi should you so wish.

We’re two for two on the door prizes! And a reminder to participate in the charity raffle.

Keynote: #WeAreNotWaiting: How open source is changing healthcare

Dana Lewis

Dana explains that she has Type 1 Diabetes, and asks us to think about what life is like managing a chronic illness. Sure, in theory it’s easy - exercise, measure sugar, and so on - but she walks us through the complexities that a chronic illness actually imposes: once the shock of diagnosis wears off, she has to grapple with the reality of insulin management. The stress of, for example, speaking in front of a lot of people will spike blood sugar. How do you predict that? How do you deal with the lag between injection and the insulin taking effect?

Sure, there are some tools that help with this: there are measuring and guidance tools (a CGM) and an insulin pump. You can continuously monitor and modify the pump’s dosage, but the reality is that there’s a huge cognitive load in actually using them. Moreover, the devices don’t talk to each other, leaving the human to move data between them; incompatibility between devices and problems with data extraction are all day-to-day problems. Dana explains that alarm fatigue also undermines the value of her pump: it’s possible to sleep through your alarm and literally die in your sleep. Imagine spending every day worrying you won’t wake up. And she can’t even change her alarm, or get it to work with her cellphone.

That started to change in 2013, when she saw a programmer who had hooked up a CGM to send its data to him, to help monitor his son. Dana approached him to share that code - and in the first month of running the software (which dropped alerts into Dropbox, which then alerted her phone), she picked up 200 alerts she otherwise would have missed. It also allowed her to escalate to others if she missed an alarm; she nicknamed it her “DIY Pancreas”.

From there, Dana wanted to share. She knew many other people who had the same challenges. She met people who had reverse engineered the protocol of certain pumps, so now she had everything she needed to build a closed-loop system in which the CGM could direct the pump, via a computer, to take action. This is what computers are good at! Every 5 minutes, the CGM dumps its data to an Intel Edison running OpenAPS.
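
As a rough illustration of the closed-loop idea, here’s a hypothetical sketch - not OpenAPS’s actual dosing algorithm; the target, sensitivity figure, and device hooks are all invented for illustration:

    # Hypothetical closed-loop step: read the CGM, compare against a target,
    # and nudge the pump's temporary basal rate. NOT the OpenAPS algorithm;
    # every number and device hook here is made up.
    TARGET_MGDL = 100            # desired blood glucose (mg/dL)
    BASAL_UNITS_PER_HOUR = 1.0   # the pump's normal background insulin rate
    SENSITIVITY = 50             # mg/dL drop per extra unit/hour (invented)

    def loop_step(read_cgm, set_temp_basal):
        """One iteration of the loop, run every ~5 minutes."""
        glucose = read_cgm()                   # latest CGM reading, mg/dL
        error = glucose - TARGET_MGDL          # distance from target
        adjustment = error / SENSITIVITY       # crude proportional correction
        rate = max(0.0, BASAL_UNITS_PER_HOUR + adjustment)
        set_temp_basal(rate, duration_minutes=30)
        return rate

    # Stub devices so the sketch runs standalone.
    loop_step(read_cgm=lambda: 160,
              set_temp_basal=lambda rate, duration_minutes: print(
                  f"temp basal {rate:.2f} U/h for {duration_minutes} min"))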

The result? Dana’s graphs, within a week, moved from peaks and troughs to near flat lines: consistent, predictable blood sugar levels that a non-diabetic would have.

It’s amazing. And Dana thought about the fact that she had been helped by so many people, and felt it would be unethical not to open the work up for more people to improve their own lives with. That’s the genesis of the OpenAPS project: an open and transparent Artificial Pancreas System (APS) to reduce the burden of Type 1 diabetes.

A huge amount of work has gone into thinking about safety: limiting dosing ability, responding to unexpected data, failure tolerance, and failing safely. And because this system has been designed and built by people who live with diabetes, that focus on safety is informed by what matters to people who can die if it goes wrong. Dana notes, as an aside, that the system relies on a security hole: the pump has a flaw that allows unapproved remote communication, and that’s what OpenAPS relies on to close the loop.

If you’re interested, start with the Reference Design, which is written from the perspective of how a human manages diabetes.

Around the world, there are thousands of people using OpenAPS, which has been a pleasant surprise. Dana expected that a commercial product would one day replace it. But instead, people trust and rely on it: there are over 9 million hours of experience with APS. In the commercial world, though, things can come to market with as little as 150,000 hours of data from trials. Dana refuses to let this data go to waste: it is available, suitably anonymised, whether to individuals who want to learn to manage their own devices better, academics researching diabetes, or companies building devices to manage diabetes. And that is how open source and open data are revolutionising healthcare.

Another story: a parent who introduced APS to help manage their 2 year old’s diabetes. It moved the family from a world where they had to intervene manually an average of 4.5 times per day (feeding tablets, monitoring, changing pumps) to less than one per day. A third story: a school-age child who had to visit the school nurse an average of 2.3 times per day; once he started using OpenAPS, this dropped to 5 visits for a whole school year. This revolutionises not only the life of the person who has diabetes, but the lives of everyone who helps them: teachers, parents, grandparents.

The coolest thing has been the evolution of the software and hardware in the community: people rock up and make things better. The hardware is a good example: in 2014 the stack was physically cumbersome - a full-sized Pi and a lot of clunky power. A person in England helped move it to the Intel Edison (which they love for its small size, and are gutted to see discontinued), which massively shrunk the form factor and made it more portable. And this has continued: people using a HAT to make it cellphone-independent, for example.

Dana would like you to think about the risks - the security around the stack, for example - but the quality-of-life improvement vastly outweighs them. The risk posed by a hacker is a lot lower than the risk of routinely dying. The biggest challenge, though, is credentialism: people sceptical that they can contribute because they don’t have a particular title or label.

Finally, the iterative model of development common to open source is vastly better than the commercial model: commercial models focus on building a complete saleable product some day later; OpenAPS has made things better in small steps, and each small step matters: for Dana it started with “build a better alarm” and ended with APS, years before the commercial healthcare industry built anything comparable. And they’ve kept improving.

“With open source, there are endless opportunities to improve lives of those we love.”

Taking Technology to the Birds: The Cacophany Project

Clare Mclennan @crazyKiwiClare and Menno Finlay-Smits @mjs0

“An open source startup dedicated to increasing bird song in New Zealand by applying modern IT techniques to predator control.”

They consider bird song a useful indicator of the health of bird populations. They’re a charity that openly shares code and ideas, works with other organisations, and is keen to encourage contributions.

  • Our birds are taonga.
  • Many are unique in the world, and evolved without mammals.
  • We currently spend $70 million per year on controlling pests; “can we get some to zero”.
  • Agricultural benefits: TB is spread by possums.

The current state of the art for tracking and controlling predators includes chew cards (baited cardboard) and trapping. Traps unfortunately have a very low rate of interaction (less than 1%), so they aren’t very effective.

So how much better could Cacophony-driven traps work? Menno gives some numbers:

  • Cover 100 times the area.
  • Target 4 types of pest.
  • Catch at least 10 times as often.
  • Multi-trap (auto-reset).

The platform consists of an audio recorder, a thermal video platform, and a sidekick application. These call back to a cloud presence: machine learning and storage fronted by an API.

Hardware

Cacophonometer

  • Quantifies birdsong over time. A recorder on cheap Android phones.
  • Provides long term trends and species identification.
  • Beginning to categorise species based on song via machine learning.

Thermal Video

  • Thermal camera - many of the animals are active primarily at night.
    • Lepton 3 thermal camera.
  • Raspberry Pi.
    • 3G/4G modem.
    • Custom hat - real time clock, camera socket, power management.
  • Feeds into a machine learning system to recognise different animals: cats, possums, rats, etc.

Challenges

  • Weatherproofing has been a huge challenge. They started with gel, and lost a lot of devices. They now use better seals, weatherproof connectors for everything, Gore valves to let pressure equalise, and silica gel to remove moisture.
  • The Lepton 3 camera is good bang for the buck, but it is difficult for a developer to use reliably. Read performance is critical; they had to move from inline Python to a dedicated Go process running with Linux real-time priority.
  • Power: once they wanted to move beyond being hooked up to mains power on the property, batteries were required.
    • Early use of commodity battery packs didn’t work well.
    • They don’t respond well to the sleep state.
    • They now use custom batteries.

Processing the output

They detect motion using the thermal camera itself and use machine learning to analyse what it sees; a simple sketch of the motion-detection idea follows the list below.

  • You can’t rely on the dual-sensor techniques used by hunting or trail cameras, because a PIR sensor doesn’t notice small animals, so they have to use the thermal camera the whole time.
  • It’s fast and detects small animals well.
  • Unfortunately it’s power hungry and suffers in the wind.
    • The wind is still a challenge: false triggers occur as objects are moved by the wind.
    • Help is welcome!
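
To make that concrete for myself, here’s a minimal frame-differencing sketch of thermal motion detection - my own illustration, not the project’s code, with invented thresholds:

    import numpy as np

    MOTION_THRESHOLD = 40    # per-pixel change that counts as movement (invented)
    MIN_CHANGED_PIXELS = 20  # pixels that must change before triggering (invented)

    def motion_detected(prev_frame: np.ndarray, frame: np.ndarray) -> bool:
        """Trigger when enough pixels change between consecutive thermal frames."""
        diff = np.abs(frame.astype(np.int32) - prev_frame.astype(np.int32))
        return int((diff > MOTION_THRESHOLD).sum()) >= MIN_CHANGED_PIXELS

    # Fake 120x160 thermal frames: a warm blob appears in the second one.
    prev = np.full((120, 160), 3000, dtype=np.uint16)
    cur = prev.copy()
    cur[50:60, 70:80] += 500  # a small warm animal enters the frame
    print(motion_detected(prev, cur))  # True

A detector this naive triggers on anything that changes pixel values, including vegetation moving in the wind, which is exactly the false-trigger problem they describe.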

Even before classification, there are huge benefits: understanding, for example, that a trap is not placed where the predators are roaming, or that the bait is placed in a way that lets animals pull it out without triggering the trap. We can give people knowledge they didn’t have.

The classification pipeline locates the animals - the warm spots per frame - and then links the frames together to create an animal “track” for the machine learning system to look at. Noise is removed to allow the ML system to focus on the track. The system is working with grids as small as 48x48 pixels, so it’s remarkable how well things can be detected. Occlusion - animals moving behind objects - is a problem and a current focus.
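
A toy version of the warm-spot-then-link idea (again, my own simplification rather than the actual pipeline; the threshold and jump limit are invented):

    import numpy as np

    WARM_THRESHOLD = 3200  # raw thermal value treated as "warm" (invented)

    def warm_centroid(frame: np.ndarray):
        """Return the (row, col) centre of the warm pixels, or None if there are none."""
        ys, xs = np.nonzero(frame > WARM_THRESHOLD)
        if len(ys) == 0:
            return None
        return (float(ys.mean()), float(xs.mean()))

    def build_track(frames, max_jump=15.0):
        """Link per-frame detections into one track while they stay close together."""
        track = []
        for frame in frames:
            c = warm_centroid(frame)
            if c is None:
                continue
            if track and np.hypot(c[0] - track[-1][0], c[1] - track[-1][1]) > max_jump:
                break  # the detection jumped too far; treat it as a different animal
            track.append(c)
        return track

    # An "animal" drifting across five fake frames.
    frames = []
    for step in range(5):
        f = np.full((120, 160), 3000, dtype=np.uint16)
        f[40 + step, 60 + 2 * step] = 3500
        frames.append(f)
    print(build_track(frames))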

The main animals being focused on are birds (divided into “birds” and “kiwis”), rats, cats, mustelids (stoats, ferrets, and weasels), hedgehogs, and possums. The fundamental divide, of course, is the predator vs non-predator distinction. While it’s interesting to distinguish between possums and hedgehogs, it’s more important to note that they’re predators.

Training the model is done from three-second, 48x48 pixel units. These are split into a training set, a test set, and an evaluation set. Running a training session takes around 6 hours. They use NumPy, OpenCV, and TensorFlow as their tools. The videos are returned with the classification and other feedback from the machine learning algorithm.
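
For a sense of what training on those units might look like, here’s a toy TensorFlow sketch - an assumption for illustration only; the project’s real model, classes, and input pipeline will differ:

    import numpy as np
    import tensorflow as tf

    NUM_CLASSES = 6   # e.g. bird, kiwi, rat, cat, mustelid, possum (illustrative)
    FRAMES = 27       # roughly three seconds of thermal video (assumed frame rate)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(FRAMES, 48, 48, 1)),
        tf.keras.layers.Conv3D(16, kernel_size=3, activation="relu"),
        tf.keras.layers.MaxPooling3D(pool_size=2),
        tf.keras.layers.Conv3D(32, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling3D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Train on random stand-in data just to show the shape of the pipeline.
    x = np.random.rand(8, FRAMES, 48, 48, 1).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=8)
    model.fit(x, y, epochs=1, batch_size=4, verbose=0)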

Next steps:

  • Interpreting track results.
  • More regular model training.
  • More animal classes.
  • Improved track extraction.
  • Run the classifier on the Pi.
  • Different models - animal? Predator?

All of this is exposed via a web application and visualisation of what’s going on in the fleet.

Near Future

  • Machine learning onboard for remote sites that can’t call into the cloud.
  • Automated, accurate predator monitoring. This is a big deal for improving predator control.
  • Cacophony Index from bird recordings to measure the effectiveness.
  • Audio lure experiments - some animals will be more likely to enter a trap.
  • Pair cameras with “open architecture” traps.

Far future?

  • Shoot predators with toxins. That seems, uh, like a very Heath Robinson version of 1080 drops.
  • Drone-based versions of the surveillance.

I’m surprised that there’s no interest in environmental destroyers such as pigs, deer, etc.

Firmware security, why it matters and how you can have it

Matthew Garrett

Just what is code? Code is bugs. “You wouldn’t believe the number of bugs I can write in a minute. I’m very good at it.” And the more software we have in the firmware, the more bugs there are. We care about this because the firmware is the root of trust for the system; if someone can compromise your firmware, they own everything.

This is not a theoretical risk! We have seen firmware rootkits in the wild - and reinstalling your OS does nothing.

So what’s the attack surface for firmware? Binaries, filesystems, physical access, and a terrifying amount of runtime stuff.

So far the industry approach to protecting firmware from physical attack is pretty limited: TPM attestation or signing, for example. But your firmware still needs to load a bootloader, parse it, and execute things. That’s a lot of stuff to do! And it means that giving the firmware a malformed binary or filesystem may be a good way to compromise it. And worse, there are usually interfaces for the OS to ask the firmware to do things, whether an old-school DOS interrupt or a modern UEFI interface. Worse yet, the old DOS interfaces - which have no protection - still exist even on new systems!

Firmware has always been this bad. It matters more now.

Where is your firmware? Everywhere:

  • Motherboard chipset.
  • Embedded controller.
  • Network hardware.
  • Storage device.
  • Everywhere. (It’s in the house.)
  • It’s everywhere. (It’s outside the hardware.)

A recent paper found that some encrypted SSDs had magic, undocumented calls which gave you the ability to bypass authentication and access the decrypted data.

System Management Mode was introduced with the 386SL - the first laptop chip - and is used to run certain elements of the laptop hardware: switching fans off and on, for example.

  • The CPU traps into a special privileged mode.
  • It runs code that lives in hidden RAM.
  • That code executes user-provided arguments, with predictably awful outcomes.
  • SMM does useful things, like stopping the OS from overwriting the BIOS flash. But you can also, say, misuse the backlight SMM controls and point them at kernel memory to overwrite it. Which is not so good.

Unfortunately Intel are not the only people who have made poor life choices; ARM looked at Intel’s mistakes and decided to go one better with TrustZone, which is a whole operating system stuffed into the secure world. The idea is you can keep things there that you want hidden from the regular OS. A common use is DRM code, which runs in the TrustZone OS, ostensibly hidden from your regular OS. Unfortunately, because TrustZone and the TrustZone OS are made of code, they can be exploited to invisibly run malware.

Of course, not all firmware is SMM; some is ACPI (which, Matthew notes, is pretty safe as long as you don’t use the kernel feature that allows the injection of ACPI information). UEFI runtime services are completely privileged and can do dreadful damage.

Then there’s the Management Engine: it’s a whole other processor in your processor, which runs before your cores are even activated. It runs Minix - and it is worth noting Minix is a teaching operating system, according to its author. People have, of course, been able to break into the Management Engine.

So, where are we? If attackers can trigger arbitrary code execution in your firmware - and they can - we have a problem. We are, happily, getting better at building tools that reduce the severity of bugs.

Approach 1: never touch untrusted data.

  • This makes booting an OS rather tricky.
  • (Assuming you trust the firmware itself not to do anything nasty.)
  • Unfortunately we want the computer to do things. Minimising functions in the firmware is good.

Approach 2: Vulnerability mitigation.

  • Stack canaries can fire and halt execution.
  • Use ASLR to make it harder to exploit.
  • Have a non-executable stack to make it harder to exploit via a payload.

Firmware is starting to make use of these sorts of techniques - there are definitely versions of ASLR.

Approach 3: Don’t write bugs.

  • Ha ha ha.

C is security vulnerabilities as a language. People will, of course, claim that this is a result of insufficiently skilled programmers, in which case you can amuse yourself by pointing to the CVEs they’re responsible for.

Also, code quality isn’t sufficient: configuration is difficult, and it’s easy to miss things or make mistakes that create vulnerabilities; ChipSec is a tool that can help you eliminate these.

How can we reduce attack surface?

  • LinuxBoot is a payload that runs off the UEFI PEI layer (the early boot layer), giving you Linux. Of course, Linux has no bugs.
  • Yabits - payload for Coreboot; a small UEFI implementation, which should hopefully be less buggy.
  • Project Mu - derived from the reference Tiano code. A Microsoft initiative to strip down the reference code, make it easier to collaborate on, and generally improve Tiano. It includes Microsoft open sourcing a lot of Surface code.

Coreboot, Matthew notes, does not save us. Yes, it can be patched more easily. But unfortunately Coreboot doesn’t really do a lot of the things we want by itself; instead, these are implemented in payloads, which shifts the problem to the payloads. Some of those payloads have vulnerabilities; worse yet, Coreboot doesn’t support secure boot, and it has no firmware flash protection.

What about the Management Engine? If you have physical access, you can downgrade older Management Engines to vulnerable firmware. You could buy a new computer, or just not let anyone touch your computer.

Firmware upgrades are important

With the Linux Vendor Firmware Service (LVFS) it’s a lot easier to update your firmware and have it delivered by your vendor or distributor, which is awesome.

Options for non-x86

  • OpenPOWER has no Management Engine and a much smaller attack surface.
  • Some ARM platforms have sensible options like U-Boot, which reduces the amount of code and hence the attack surface; Matthew notes, though, that he’s been able to find filesystem vulnerabilities in it by fuzzing, so there are at least theoretical vulnerabilities.

Options for more advanced users

  • Chromebooks allow users to build their own firmware.
  • It’s not straightforward to manage your own vboot keys (used to let you run your own signed builds).
  • If you’re super-advanced you could, theoretically, run vboot on any Coreboot target.

Unfortunately there are no great options; it’s all tradeoffs. Free boot options don’t have all the security options, for example, but at least you can fix the firmware bugs.

Q&A

  • What is your opinion on disabling the Management Engine? You can’t. You can minimise the code running on it, but anyone claiming to disable it is lying.
  • What is your opinion on the viability of the Supermicro hack Bloomberg claimed? There is no evidence it happened; however, a proof of concept of certain aspects of the idea has been demonstrated.
  • Why does the Intel ME run Minix? shrug Well, one thing you have to think about is maintainability. For example, while OpenFirmware is good, it’s hard to find Forth developers. So we have UEFI. It’s easy to find people who can work on Minix, and presumably the license was attractive.
  • How well will a hypervisor isolate your guest from the firmware? It depends on the hypervisor. KVM, for example, fully isolates the guest from firmware, so you’ve made the world better. Dom0 on Xen, on the other hand, defaults to access to the firmware.
  • Are there any vendors doing a particularly good job of firmware? You could look at LVFS and see how many vendors are updating their firmware at a reasonable rate. Beyond that it’s hard to say.
  • Is there a meaningful difference between Intel and AMD MEs? AMD is a bit better, but it’s a question of degree more than anything.

Deep Learning, Not Deep Creepy

Jack Moffitt

Have a look at Portal from Facebook: one reaction is to be horrified by the idea of a surveillance device from Facebook; the other is to be delighted by the convenience.

These technologies, particularly machine learning, have a lot of benefits, but also a lot of problems. We can’t undo them, so we need to think about how to mitigate the problems.

  • Classical machine learning is a program that makes decisions, not by explicit code, but by calculating models from input data.
    • Jack offers the example of a language detector that can analyze text and determine the probability of it being in a given language.

So how is deep learning different? Well, in a lot of ways. For starters, classic machine learning relies heavily on humans to select the input features and to correct the models. Deep learning instead has a gang of simpler functions that are combined to produce a result, and those simple functions can be stacked so that each layer learns from the one before. We don’t need to do feature selection any more, because the functions can “work out” what matters.
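
To make that concrete for myself, a toy, hand-rolled stack of “simple functions” - not any particular framework’s API, and the sizes are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(inputs, weights, bias):
        """One 'simple function': a weighted sum followed by a nonlinearity."""
        return np.maximum(0.0, inputs @ weights + bias)  # ReLU

    # Three stacked layers: each one consumes the output of the previous one.
    x = rng.normal(size=(1, 8))            # made-up input features
    w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
    w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
    w3, b3 = rng.normal(size=(16, 2)), np.zeros(2)

    hidden1 = layer(x, w1, b1)
    hidden2 = layer(hidden1, w2, b2)
    output = hidden2 @ w3 + b3             # final scores, e.g. two classes
    print(output)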

(At this point, we dip into how the deep learning networks operate and interact. As we rock through “this is where we use a bit of fun calculus” the talk gets smarter than I am.)

You can do amazing things with deep learning - using adversarial networks to make convincing looking humans, for example. But there are also dark sides.

For starters, deep learning has an infinite appetite for data. Machine learning has an upper bound on how much data is useful; deep learning has no known upper bound. This drives companies to want infinite access to data - infinite access to you. Alexa and Google Voice want everything everyone has ever said because it makes the model better. This creates obvious risks and hazards.

Deep learning requires massive amounts of power. Models typically won’t run on edge devices, centralising compute and data in an ever-smaller group of hands. There is a disincentive to be efficient, and that tends to reward monopoly. Deep learning can guzzle even more power than crypto mining.

Following on from that, deep learning is often impossible to understand. You can’t debug results and understand why you’re getting results you don’t expect; you can’t understand a million-dimensional vector. Fixing a broken model offers only a few coarse corrections, and the models are basically black boxes. This is especially a problem with bias: classifying black faces as “gorillas”, assuming doctors are men, discarding women’s CVs. These systems amplify bias, and there are no general, well-understood fixes.

Mozilla and speech recognition

Why?

  • A handful of proprietary companies owned all the effective speech recognition systems.
  • You have to supply $$$, data, or both to use them. Serious privacy and access concerns.

Mozilla’s DeepSpeech initiative gives you high-accuracy speech recognition that can run on your device, protecting your privacy and allowing access.
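
For a sense of what “runs on your device” looks like, a hedged sketch using the deepspeech Python package - the file names are placeholders, and this is the API as I recall it around the 0.9.x releases, so check the docs for your version:

    # Hedged sketch: local speech-to-text on a WAV file with the deepspeech
    # package. Model and audio paths are placeholders.
    import wave
    import numpy as np
    import deepspeech

    model = deepspeech.Model("deepspeech-0.9.3-models.pbmm")  # placeholder path

    with wave.open("recording.wav", "rb") as w:  # expects 16 kHz, 16-bit mono
        frames = w.readframes(w.getnframes())
    audio = np.frombuffer(frames, dtype=np.int16)

    print(model.stt(audio))  # transcription happens entirely on-device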

Common Voice is an attempt to expand beyond English. You can volunteer to read transcripts to train models; currently 20 languages are in progress. They are keen to make sure languages which are not commercially viable are supported.

Deepproof

One of the most popular extensions for Firefox is Grammarly. It sends everything you type, key by key, to a private cloud provider with the promise of grammar and spelling checking.

Mozilla are using Wikipedia and real-life data from Lang8 to train a network on the real-world use of language, in order to make a grammar and spelling checker that doesn’t require you to run a keylogger. It’s going well, but it’s not really production-ready yet.

lpcnet

  • Current text-to-speech architectures are not end-to-end; there are two disconnected networks that analyze text and create waveforms as separate tasks.
  • The standard Griffin-Lim approach is not very good.
  • WaveNet, Google’s improvement, is 10+ GFLOPS and can’t be run on an edge device.

lpcnet is an attempt to create edge-compatible, high-quality text-to-speech by combining classical DSP with the deep learning system; it currently requires 1.5 to 6 GFLOPS, which can run on a mobile device in real time, with better quality than competing approaches. You can also repurpose it for compression, noise suppression, time stretching, and covering up packet loss on e.g. a video call.

Q&A

  • Why is it called deep learning? Because neurons and layers are massively stacked. The diagram showed 5x5 vectors, but in practice real learning systems have tens or hundreds of millions of neurons.
  • Can you use it for musical analysis? Not that model, no. It probably wouldn’t even understand singing.
  • You talk about languages, but does it include dialects? Common voice does collect different dialects.
  • Are faulty models being destroyed? Don’t know. One problem is that we can’t work out whether a model is faulty. If your parole is wrongly decided, how do you even prove that? Moreover, when talking about companies at Google scale, there’s no appeal process - they don’t care about mistakes that affect individual customers.
  • How do you put models on the edge? You can trade accuracy for lower requirements; you can use lower precision in the maths. That’s why, in lpcnet, they’ve replaced parts of the network with more efficient mechanisms. With that trade-off, though, it’s hard to know how to encourage people to demand models that are device-local. (There’s a small quantisation sketch below.)
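
On trading precision for size, here’s a minimal sketch of post-training quantisation with TensorFlow Lite - my illustration of the general technique, not what Mozilla actually does:

    import tensorflow as tf

    # A tiny stand-in model; in practice you would quantise a real trained network.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(16,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantisation
    tflite_bytes = converter.convert()

    print(f"quantised model is {len(tflite_bytes)} bytes")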

In conversation afterwards Jack noted that another concern he has is the lack of hardening around all of this; most deep learning has not really been tested against malicious attempts to derail it; he noted, for example, that Alexa can be convinced to respond to ultrasound without the owner’s awareness.

What’s new in cgroups v2

Michael Kerrisk

Why do we have cgroups v2? Well, v1 was a pretty ad-hoc effort, and it evolved into a bit of an inconsistent mess, sadly. After about three and a half years of development, v2 was released with Linux 4.5 in 2016. You can, with some limitations, mix cgroups v1 and v2 on the same system, but any given controller can only be used in one of them at a time.

  • The v1 scheme permitted many controllers and many hierarchies. Unfortunately that flexibility wasn’t really practically useful, and did create problems.
  • Utility controllers that might be useful in all groups can only be used in one hierarchy - for example, you can only freeze one hierarchy atomically.
  • In most cases, orthogonal (non-parallel) hierarchies were not needed.
    • So v2 sticks to a single hierarchy, with all the controllers available on it. There is an additional mechanism that allows you to change granularity within the single hierarchy.
  • Thread granularity makes no sense for some controllers - there’s no point in thread-level control for the memory controller, for example.
    • The first release of v2 threw out sub-process granularity.
    • That has been walked back a little.
  • v1 allowed a cgroup to contain both tasks and child groups, which meant two types of entity competing for the same resources, and different controllers made different decisions about the same situation.
    • v2 allows processes to exist only in leaf groups; in general, child groups and processes cannot co-exist in the same group.
  • Controllers were wildly inconsistent in terms of API design and implementation.
    • v2 has a set of consistent, documented rules around API design.
  • Group release notification was a great feature, allowing you to discover that a cgroup has become empty so you can do something when the last task exits. That’s great, but it was a tremendously expensive operation. This has been fixed in v2.

Cgroups v2 controllers are close to parity with v1 controllers; some are essentially identical to their v1 equivalents, while others have required a full rewrite. The most important blocker had been the CPU controller; that was released last year. The missing controllers are the freezer and hugetlb controllers. The former is coming Soon™ while the latter may simply be dropped.

You need the cgroup2 filesystem mounted (conventionally under /sys/fs/cgroup); systemd will do this for you. All the controllers automatically become available in the hierarchy without an explicit mount option; each directory in the hierarchy tells you what is available via its cgroup.controllers file. If no controllers show up, the problem may be that they are in use in v1; you’ll have to unmount them from there first. Unfortunately you need to unwind everything in the v1 hierarchy to make that happen; that includes moving all the processes out of the hierarchy, and removing all the child cgroups as well.

In a worst case scenario you may need to boot with the cgroup_no_v1 parameter. systemd will be sad, but it will cope (and is starting to understand v2).

cgroup.controllers only tells us which controllers are available; the cgroup.subtree_control file is used to enable and disable controllers, with e.g. echo '+pids -memory' > cgroup.subtree_control. Note that the controllers are enabled for the subtree, not the current group. When enabled, interface files for each controller (pids.max, for example) appear in the child groups. This is also the mechanism for enabling differing granularity of controllers in the hierarchy, allowing them to be turned on and off for different parts of the tree; moreover, you can’t override ancestor settings from child groups.
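
To make the workflow concrete, a minimal sketch - assuming root privileges and a pure cgroups v2 mount at /sys/fs/cgroup; the “demo” group and the pids limit are my own invention:

    import os

    CG = "/sys/fs/cgroup"             # assumes a pure cgroups v2 mount
    child = os.path.join(CG, "demo")  # an illustrative child cgroup

    def write(path, value):
        with open(path, "w") as f:
            f.write(value)

    os.makedirs(child, exist_ok=True)

    # Enable the pids controller for children of the root (not for the root itself).
    write(os.path.join(CG, "cgroup.subtree_control"), "+pids")

    # The pids.* interface files now exist in the child; cap it at 64 processes.
    write(os.path.join(child, "pids.max"), "64")

    # Move the current process into the child cgroup.
    write(os.path.join(child, "cgroup.procs"), str(os.getpid()))

    print(open(os.path.join(child, "cgroup.controllers")).read())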

As in v1, you can organise your hierarchy with a set of subdirectories; all processes are initially created in the root cgroup, and you change that by moving them into other groups in the hierarchy. But there are some important differences:

  • In v2, resource control is never applied to processes in the root group; you must put them in child groups.
  • A cgroup can’t, as a rule, contain both child groups and processes; only leaves can hold processes.
    • It’s slightly subtler than that; you can’t have enabled controllers and processes in the same group.
  • For cgroup notifications, v2 is radically different (see the sketch after this list).
    • There’s a cgroup.events file, whose populated field is 1 or 0 depending on whether there are live processes in the group or its descendants.
    • You can monitor the file with any file-monitoring API.
    • You can have one or more processes monitoring hierarchies, looking for changes.
  • Delegation existed in v1, but was not well-designed. It exists in a more sophisticated way in v2.
    • A key difference is that you can delegate control over a part of the hierarchy to non-root users.
    • The mechanism is that the delegator gives the delegatee ownership of the files in the hierarchy.
    • Delegation in cgroups v1 was an accident, and raised security concerns.
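
And a small sketch of waiting for a cgroup to become empty by watching cgroup.events - reusing the hypothetical “demo” group from above; this uses poll(), though inotify works too:

    import select

    EVENTS = "/sys/fs/cgroup/demo/cgroup.events"  # hypothetical group from earlier

    with open(EVENTS) as f:
        print(f.read().strip())          # e.g. "populated 1" while processes exist
        poller = select.poll()
        poller.register(f.fileno(), select.POLLPRI | select.POLLERR)
        while True:
            poller.poll()                # blocks until the kernel flags a change
            f.seek(0)
            state = f.read().strip()
            print(state)
            if "populated 0" in state:   # the last process has exited
                break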

Right to Not Broadcast

Karen Sandler

Confronting the problems with medical devices - their security and reliability problems, and the opacity of their source code - was the start of Karen’s journey into free software. She co-authored a paper entitled Killed by Code: Software Transparency in Implantable Medical Devices; she was very focused on transparency as the key issue. She was “so confident that this was the most important thing” that she neglected the other three of the four freedoms. However, when she began receiving unneeded shocks - unnecessary treatments - during her pregnancy, it made her realise that the most important point is control, not transparency. This isn’t just about medicine, either; even mundane cases like automatic soap dispensers fail for people with dark skin, because none of the testers or developers had it. So much technology is designed by narrow groups of people that this is a real problem!

Unfortunately for Karen, it has become harder and harder to find medical devices that can’t be remotely managed. That’s particularly unfortunate because her role with Outreachy attracts threats from people; there is a real risk to her wellbeing from devices that can be remotely controlled, especially since the devices in question are known to have remote vulnerabilities. Karen used to take a certain level of comfort from the fact that people needed to be in physical contact to mess with her device.

Until researchers showed that there are literally thousands of vulnerabilities in the programmers used to manage pacemakers - which they found by buying them off eBay. And they found data for thousands of patients. Even in the case of Dana’s talk this morning - where this kind of insecurity is put to good use - it’s simply the flip side of the same coin; the lack of control is the same.

Another example is CPAP machines: patients discovering (in the US) that their CPAP machines covertly send data to insurers - data which is then referred to by the insurers’ call centres! Again, there is a lack of control and consent. And at a simple consumer level, try finding a non-smart TV; smart TVs, like home assistants, pervasively monitor your conversations and behaviour and send them to a third party, often without meaningful consent, for uses you haven’t anticipated.

Karen, by the way, is a proponent of the GDPR; she views it as a tremendous gift to the world based on the ripple effect it’s had. The key element she would like to see become more pervasive is the idea that meaningful, revocable consent must exist for technology. Incorporating connectivity means we can never be totally in control of our critical information; even where laws are on our side, the accidental effect of vulnerabilities can be awful. Likewise, just because a government or company is trustworthy today doesn’t mean things can’t change.

A huge problem, Karen argues, is that most of us don’t really understand the scope and meaning of our consent when we give it; and even when we do we may not have any meaningful alternative. So how do we opt out? Well, we need to continue to support disconnect options, or at least use the product in a non-centralised way.

Look at Fitbits: Karen would love to have one! She loves technology! But she can use so little of it - unless she’s prepared to consent to surveillance capitalism.

And what, by the way, is health information? Well, it encompasses more and more of our lives. Our meal photos could be hoovered up and analysed to decide whether we should get healthcare. Our driving habits, our sex lives, our friends - anything, really. In the same vein, what do we consider critical software? We see smart home gadgets being used by abusers to exert control over their victims.

So consent, to be meaningful, needs realistic alternatives, and those alternatives need to be presented as part of the consent process. Ask yourself, for example, why the people at ground zero of surveillance capitalism - Silicon Valley - mostly forbid this tech to their kids: no tablets in schools, nannies instructed to prevent phone use, and so on. There’s a real class element here, too.

Karen once hoped that consumer-driven action would help solve this problem, but it doesn’t seem to be happening. Reverse engineering, opting out - these things are all good, but they’re band-aids on a system that is broken. We may need to resort to regulation, because we see manufacturers countering every technological or small legislative effort.
