Excerpted from a transcript of a panel discussion on Cyborg Liability moderated by Rita Heimes, Research Director at IAPP at RSA 2016.
Can you tell us more about the liabilities in a cyborg future?
MJ Petroni: In the future, as we increasingly augment our bodies with implants and sensors, adopting these cyborg technologies, we're looking at devices that enable people to have better health outcomes, but also devices that hackers can access, for example by instructing a hospital drug pump to deliver a fatal dose.
For example, Vice President Dick Cheney had a pacemaker installed and was given the option of remote telematics, a technology that would let his doctor update or adjust the pacemaker without opening up his chest. He chose to have that feature disabled. No matter what you think of Cheney, it was a very prescient decision, because not many years later it was shown that those systems were very easy to hack and rather inconvenient to upgrade to be secure. There was a very real risk that someone could hack his pacemaker and have a shot at harming him.
It seems health care providers could be a particularly lucrative target for hackers.
MJ: Absolutely. And the implications of what patients are agreeing to have changed aren't always clear. Recently, in the news, a hospital in Los Angeles was held for ransom by hackers. The hospital ended up paying the ransom because the attack shut down medical operations and was endangering human life. That's untenable, and most hospitals aren't yet protected against it. Most of those systems are so hard to upgrade that it would be difficult to quickly defend against cyberthreats. At the same time, we're introducing new and novel technologies that allow for connectivity with medical devices, which is a benefit to the consumer but a potential security threat as well.
And who's liable when a medical device is hacked? Is it the software developer? Is it the maker of the particular chip that was designed for security? Is it the operator of the database that gave out the information that let the device be hacked? How do you actually apportion liability in devices where there are thousands and thousands of different stakeholders throughout the process of development?
What else do medical technology developers need to keep in mind to strengthen security?
MJ: It's not just hackers we need to worry about, but also machine-to-machine communication. Who's liable when a machine gives you the wrong medication? This is something that happens occasionally in pharmacies through human error; at least we know what we're taking on there. Now we have a machine that seems trustworthy and operates correctly 99.99% of the time. Do we start questioning what it's giving us the same way we would with a human pharmacist, verifying what it says on the label, for example?
Another question is: who's liable when software updates are a matter of life and death? We're dealing with situations where we're looking at augmented human experience in a real sense: not just replacing damaged parts of the human body, but actually augmenting them. Let's say you upgrade or replace someone's ocular implant. If that digital layer is what a person depends on for vision, and it fails, how do you manage pushing out the software update to remedy it?
As the threat of hacking becomes more personal, and more about our health and wellbeing, these questions of who's liable for securing us become more pressing.