Many of the things we do to each other in the 21st century – both good and bad – we do by means of smart technology. Drones, robots, cars, and computers are cases in point. Military drones can help protect vulnerable, displaced civilians; at the same time, drones that do so without clear accountability give rise to serious moral questions when unintended deaths and harms occur. More generally, the social benefits of our smart machines are manifold, and the potential drawbacks and moral quandaries extremely challenging. In this talk, I take up the question of responsible innovation, drawing on the European Union experience and value sensitive design, and reconsidering the relations between ethics and design.
Jeroen van den Hoven is a professor of Ethics and Technology at Delft University of Technology. He was the first scientific director for 3TU/Ethics and is currently editor-in-chief of Ethics and Information Technology. In 2009 he won both the World Technology Award for Ethics and the IFIP prize for ICT and Society for his work on ethics and ICT.
The Internet of Things (IoT) is quickly expanding to include the next big product in its interconnected family – the smart watch. While these high-tech watches are not necessarily new, recent releases from companies like Samsung, LG and Apple have given them more mainstream public appeal and market share. In welcoming the watches to the Internet of Things, customers are also introduced to the various privacy and security questions that researchers and governments have scrutinized in IoT. This past month Tech Policy Lab members participated in the Workshop on Usable Privacy & Security for wearable and domestic ubIquitous DEvices (UPSIDE). The workshop brought academics from around the country together to discuss various IoT privacy issues. The FTC addressed these issues on a larger stage last fall, hosting an IoT-focused workshop to identify and address privacy and security problems. As the smart watch maneuvers its way onto the wrists of customers, wearers should take note of these problems.
The health and fitness craze that inspired products like FitBit and Garmin’s Connect has strongly influenced the design of recent smart watches. Products like the Samsung Gear Fit and the Apple smart watch have incorporated pedometers and heart rate monitors to allow users to measure their daily activity. With the increasing prevalence of health monitoring in everyday technology, it was no surprise that the FTC devoted one of its four workshop panels to the topic last fall. The “Connected Health and Fitness” panel brought in business and academic experts to discuss the benefits as well as the data privacy and security risks involved. As the smart watch’s popularity increases, preventing these risks will become imperative. Data theft could lead to location data and activity patterns being extracted from pedometer or heart rate readings. Risks could even come from the company itself. What would happen if a health insurance provider were sold health data collected from a prospective applicant’s smart watch? Protecting this data from both unwanted collection and use will be a necessary measure to ensure privacy in the age of the internet-enabled watch.
One of the papers presented at UPSIDE went hunting for privacy flaws in IoT products like Google Glass and the smart watch. Titled “When Everyone’s A Cyborg: Musings on Privacy and Security in The Age of Wearable Computing,” the paper by Serge Egelman highlighted one of the major issues – “the continuous capture of audio and video.” While UPSIDE provides only the abstract of the paper, it is easy to extrapolate where audio and video recording could breach a smart watch user’s privacy. A user wants the watch to listen while giving it instructions, but unwarranted recording would present a security risk. The same is true for the camera. Unchecked recording devices could leak the extremely personal data of an unknowing user. As with the health and fitness privacy risk, this data could be misused by a thief or third-party company by way of reconnaissance, blackmail or harassment.
Privacy in the land of smart watches is not entirely hopeless, however, as the watch may fare better than other IoT devices in some instances. Specifically, updates and patches to address security issues could be much more common on a watch than on a product like an electrical grid monitor. The issue is something the FTC addressed in its questions to the public ahead of its conference. Two of the questions show the FTC’s attention to the issue: “How can companies update device software for security purposes or patch security vulnerabilities in connected devices, particularly if they do not have an ongoing relationship with the consumer?” as well as “Do companies have adequate incentives to provide updates or patches over products’ lifecycles?”
In both cases the smart watch offers an optimistic answer. On the first question, the smart watch is likely to be exempt, as it will engage any user enough to form an ongoing relationship with them. By delivering notifications, sounding morning alarms, and even telling the user the time, it directly impacts and interacts with a user’s life on a daily or even hourly basis. Adequate incentives to provide updates are present as well. The smart watch is one of the newest hardware endeavors among competing companies like Apple, Samsung and Google. For one of these companies or even a smaller one to have its product succeed, it must convince the customer the product is better than the rest and worth buying in the first place. A product with unpatched security threats would do neither.
The smart watch fits into the privacy and security discussions of the IoT just as well as it fits onto a wrist. Prevalent issues such as health monitoring and audio or video capture could cause serious risks to consumers. Eliminating these risks and providing a safe product will be vital to the product’s success, bringing a great new addition to the Internet of Things family.
Bill Gates once predicted we would have a robot in every home to go with our personal computer. James Temple is calling Jibo—a new personal robot to be sold commercially in 2015—“one of the most ambitious and affordable robots for the home that [he has] seen.” I agree.
Developed by social robotics pioneer and MIT professor Cynthia Breazeal, Jibo represents a sleek, responsive, versatile platform that can help individuals and families negotiate daily life. Here is the video Breazeal’s startup put together:
There are at least three things that make Jibo really exciting. The first is the price point, as little as $500 if you preorder. (I did.) This price point reflects just how much the costs of sensors and other components of robots have come down in recent years. A second exciting aspect of Jibo is that developers will be able to write apps for it. It appears as though Breazeal’s team expects a marketplace for Jibo software. A third is that the team obviously put an immense amount of thought into how Jibo will interact with its users—if the video is any indication, we should expect less Siri and more Her.
These are the ingredients of success. Jibo could well be the Apple II of robots—the first popular entrant that opens the door to personal robotics as a true household staple. But it will face hurdles along the way, including from law and policy. I’m going to explore a few here, with some unsolicited advice to Jibo and lawmakers thrown in along the way.
A few years ago, Tadayoshi Kohno, Tamara Denning, and other colleagues of mine at the University of Washington bought some Internet-enabled home robots in order to assess how hackable they were. It turns out they were plenty hackable. According to their research, hackers could not only see and hear what the robot did, but also move it around. Thus, a compromised robot could in theory grab your spare key and drop it out the pet door. I was particularly struck by their finding that robot Internet traffic looks different enough that someone sniffing around a network could isolate robots from other devices and target them specifically.
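To get a feel for how traffic fingerprinting of this sort might work, here is a minimal, purely illustrative sketch. It is not the Kohno team’s method; it simply summarizes a network flow by its average packet size and inter-packet gap and flags flows near a hypothetical “robot-like” signature of small, frequent control packets. All thresholds and numbers are invented for illustration.

```python
# Toy traffic fingerprinting: summarize a flow by (mean packet size,
# mean inter-packet gap) and flag flows near a hypothetical robot
# signature. The signature and tolerances are made-up example values.
from statistics import mean

def summarize(packets):
    """packets: list of (timestamp_sec, size_bytes) tuples.
    Returns (mean packet size, mean gap between packets)."""
    sizes = [size for _, size in packets]
    times = sorted(t for t, _ in packets)
    gaps = [later - earlier for earlier, later in zip(times, times[1:])]
    return mean(sizes), mean(gaps)

def looks_robot_like(packets, signature=(120.0, 0.05), tolerance=(40.0, 0.03)):
    """Flag a flow whose summary falls within tolerance of the signature:
    small packets at short, regular intervals (a chatty control channel)."""
    size, gap = summarize(packets)
    (sig_size, sig_gap), (tol_size, tol_gap) = signature, tolerance
    return abs(size - sig_size) <= tol_size and abs(gap - sig_gap) <= tol_gap

# Simulated chatty control channel: 100-byte packets every 50 ms.
robot_flow = [(i * 0.05, 100) for i in range(200)]
# Simulated bulk download: 1400-byte packets in a rapid burst.
bulk_flow = [(i * 0.001, 1400) for i in range(200)]

print(looks_robot_like(robot_flow))  # True
print(looks_robot_like(bulk_flow))   # False
```

Even this crude two-feature heuristic separates the two simulated flows, which is the underlying worry: distinctive traffic shapes let an attacker single out a device class without decrypting anything.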
Hopefully Breazeal and her team will take security seriously right from the outset. Many startups do not. Everything can seem fine until a high-profile hack or research like Kohno’s shows how vulnerable the product makes the consumer. Even in the absence of a hack, the Federal Trade Commission can and will use its authority to police unfair and deceptive practices to ensure that a device—including a robot—has adequate security. It is, if anything, even more important that a social robot like Jibo be hardened against external interference.
I had occasion to visit the Microsoft Home of the Future a few years ago, a mock house meant to showcase forthcoming technology. It was really neat. Part of the demo involved placing items on the kitchen counter until, suddenly, the light softened and a voice came out of nowhere. “Hello, I’m Grace. It looks like you’re baking a cake. Do you need a recipe?” It is all well and good for Grace to watch and respond if you are baking. But what if you are using the kitchen counter for, um, a different activity?
Jibo has sensors and will live in your home, and so it raises all the usual privacy concerns that attend devices that gather, process, and store information. But Jibo is also a social platform. There is a long literature, one Breazeal not only knows but helped pioneer, suggesting that humans are hardwired to react to anthropomorphic machines as though a person were present. This includes the feeling of being observed. What this means is that you will never feel alone with Jibo. Your few remaining opportunities for solitude could disappear. And activities that feel very transactional today—typing a phrase into a search box—will become conversations. You won’t search for “symptoms X, Y, or Z” but ask Jibo, “Can you help me find out more about hemorrhoids?”
It will take very clever design, and likely much iteration, to give Jibo a welcome social feel without making it a constant source of judgment. Moreover, this issue will confront not only Breazeal’s very expert team, but potentially every app developer whose software can influence the interface.
In his book Robot Futures, roboticist Illah Reza Nourbakhsh explores the prospect that social robots will be used to manipulate consumers and citizens. BJ Fogg, Ian Kerr, and I have all made similar arguments in the past. The danger is, of course, highly speculative. But the scenario runs something like this: Imagine a virtual pet that requires digital food pellets to stay alive. Say the pellets are free at first but then come at an escalating cost. If you don’t pay, unfortunately your increasingly beloved pet “dies.” Another scenario has the lonely user buying gifts for a virtual girlfriend.
Let me be clear: never in a million years do I think Breazeal would permit her creations to be used in this way. She has, as far as I can tell, devoted her life to putting humanity into human-machine interaction. Yet is this true of everyone? If not, the issue is one bad actor away from a consumer protection problem. Moreover, imagine if Breazeal had no choice in the matter. Imagine a suspected criminal owned a Jibo. Law enforcement could, without question, obtain Jibo’s sensory data with sufficient process. But is this all? Could law enforcement, for instance, obtain a warrant to plant a question in order to check out an alibi? “I’m trying to update your calendar. Did you end up going to your mother’s Friday morning?” Could a hacker prompt Jibo to ask for password information not already in its files? These things are not necessarily in the developer’s control.
I said above that part of what makes Jibo so exciting is the prospect that anyone with the training can write software for it. Breazeal refers to Jibo as a “platform.” Brian David Johnson goes further in describing Intel’s Jimmy, encouraging consumers to think of the robot as a “smart phone with legs.” Jimmy’s hardware and software are designed to be customizable by the end user.
This is great news. It means that we do not have to wait on Jibo or Intel to come up with every useful application for the robot. Indeed, as with personal computers, the true “killer app” could come from anyone or anywhere. Jonathan Zittrain’s book The Future of the Internet (And How To Stop It) describes this feature of open platforms particularly well. But, as I explain at length in an article (and a much shorter Mashable op-ed), the prospect of open robotic platforms also raises thorny legal questions that it may take courts, states, or even Congress to address. Specifically, we do not know whom to hold responsible when an open robot running third-party software hurts the consumer or a friend.
Jibo does not directly confront the issue of physical harm because, perhaps wisely, Breazeal’s team has equipped the first generation with neither a gripper nor the means to move about the room. The robot can swivel and rotate but remains fixed in place. Will this always be the case? And, if the platform is open, will someone provide Jibo with wheels—just as Romo gave wheels and a face to the iPhone? Time will tell.
This is hardly an exhaustive list. Were Jibo to be used for therapeutic purposes, for instance, the startup might have to contend with the Food and Drug Administration. But any mass-market home robot will have to contend with at least these issues. It is a very exciting time to study robotics, in part because of the genius and perspiration of world-class roboticists like Cynthia Breazeal who have been clearing one hurdle after another for decades. It is our responsibility as a legal and policy community to help Breazeal and company clear the remaining ones.
“Very often we see sectors of the broader industry that are not computer science experts starting to integrate computers into their systems and then start to integrate networks into those systems,” said Kohno. “Because they don’t have experience being attacked by real attackers, like Microsoft and so on, their level of security awareness … appears to be dated.”
Our Faculty Directors Tadayoshi Kohno and Ryan Calo will be joining the expert panels for the Federal Trade Commission’s “Internet of Things” workshop on Tuesday, November 19, 2013, in Washington, DC.
Panel 3: Connected Cars This panel will look at the emergence of smart cars, exploring the different technologies involved with connected cars, including Event Data Recorders, head units, and telematics. Panelists will discuss data collection, closed versus open systems, and existing and potential privacy and security vulnerabilities.
Panel 4: Privacy and Security in a Connected World This panel will focus on the broader privacy and security issues raised by the Internet of Things. Topics that will be discussed include the extent to which the privacy and security issues raised by the Internet of Things are novel; how increasingly interconnected devices can manage notice and consent; best practices for managing privacy and security with new interconnected devices; and the incentives that exist for designing products with privacy and security in mind.