Blog

Understanding Journalists' Information Security Choices

This blog post, cross-posted from the Tow Center, describes recent work studying computer security in journalist-source communications, a collaboration between Susan McGregor at the Columbia Journalism School, UW HCI+D Master's students Polina Charters and Tobin Holliday, and TPL-affiliated faculty member Franziska Roesner.

Understanding Journalists' Information Security Choices

by Susan McGregor

In the roughly two years since the Snowden revelations, information security and source protection have become an ongoing focus of conferences, surveys, and how-to guides geared toward the journalism community. Yet despite chilling effects, targeted hacking, and the high-profile prosecution of sources, a Pew Research Center survey of investigative journalists (conducted in association with the Tow Center), released just a few months ago, found that relatively few of them had changed their practices in light of these events.

On its surface, this seems counterintuitive. If journalists know that their communications and data may be under surveillance or the target of attack, why haven’t they adapted their practices to mitigate these risks? Surely both protecting and reassuring sources is crucial to building the kind of relationships on which essential journalism is based. Yet apart from select news organizations, strong information security is still seen as optional by many working journalists.

Eight months ago, my collaborators and I set out to explore why this might be, by learning more about how journalists collect, store and transmit information on a day-to-day basis. The full results of this study – based on in-depth interviews with institutional journalists at a range of news organizations on two continents – will be presented at USENIX Security in August, but the paper is already available for download here.

Many of our findings will not surprise industry professionals, yet they point to shared challenges faced by organizations and journalists across coverage areas and countries, suggesting opportunities for collaboration and further development:

The infrastructure and overhead of many security-enhancing tools are incompatible with journalists' and sources' available technologies. Sources' preferences tend to drive journalists' choice of communication channel, and the most vulnerable sources may have limited or non-exclusive access to the accounts and devices that existing information security tools require. For example, some participants reported working with sources who owned only a feature phone or did not personally own a computer.

Journalists’ information security priorities are influenced by the resources and culture of their organization. Several of our study participants felt that they did not have anyone within their organization to ask about information security issues; of those who did, many referenced a colleague covering information security rather than a technical expert. Many study participants also lacked both the software to secure their communications and data (such as PGP) and the privileges to install such software on their work computers.

The risks, benefits and best applications of existing tools are poorly understood. Only one journalist in our study expressed concerns about the use of third-party communication and data storage tools, despite weak legal protections for the extensive data and metadata stored with them. Likewise, participants expressed skepticism about using anonymity-supporting platforms like SecureDrop, even though it can be used to conduct ongoing conversations between journalists and sources to verify submitted data.

Journalists have unaddressed information management needs. Many participants reported using third-party and/or cloud-based tools – often connected to personal accounts – to collect, organize and search story-related research, notes and other data. While these systems introduce vulnerabilities, they point to an opportunity to create secure, journalism-oriented software for note and data storage, organization, and retrieval.

Journalists tend to think of information security as an individual rather than a collective problem. Many of our participants said that they did not believe their work was likely to be the subject of either legal or technical targeting. Yet many participants also reported some sharing of resources with editors, proofreaders or collaborators, meaning an attack on a colleague could affect their work or vice versa.

While the results of this work suggest that there is still much to improve about journalists' information security practices, they also highlight distinct paths for future research, tool development and educational interventions, some of which are already underway. In addition, we are currently conducting research on the information security challenges that journalistic outlets experience at an organizational level, and are actively seeking collaborators. If you are interested in learning how your organization can help with this work, please contact Susan McGregor.

Lab Members' Research on Teleoperated Robots Featured by MIT Technology Review

Lab members Tamara Bonaci and Howard Chizeck's work on the security of teleoperated robots has recently been featured in a number of science news reports, including MIT Technology Review, Popular Science, and Ars Technica.

“Tamara Bonaci and pals at the University of Washington in Seattle examine the special pitfalls associated with the communications technology involved in telesurgery. In particular, they show how a malicious attacker can disrupt the behavior of a telerobot during surgery and even take over such a robot, the first time a medical robot has been hacked in this way.”

Accommodating Technology – 25 years after the Americans with Disabilities Act


Friday, May 29, 2015
1:00 pm – 4:00 pm
Kane Hall 225 (Walker-Ames Room)

This year marks the 25th Anniversary of the signing of the Americans with Disabilities Act (ADA). While there have been incredible advances in technology over the past quarter century, new technologies also regularly surface issues of accessibility. Join the University of Washington’s Tech Policy Lab for an afternoon roundtable where we will discuss current accessibility efforts, new technologies’ accessibility, and individual choice in the use of assistive technologies. We plan to explore topics such as: how emerging technologies like augmented reality can be assistive as well as present challenges for accessibility; efforts to crowdsource location accessibility information; and the cultural implications of assistive technologies that individuals may not wish to use, like neuroprosthetics and robotic augmentation.

How Technology Impacts Civil Liberties with Co-Director Ryan Calo


Newly emerging technologies affect us all in a multitude of ways, and today's turned-on, always-connected world is more pervasive than ever. Nuala O'Connor, president of the Center for Democracy & Technology, will discuss how the internet and our interconnected world shape our lives, impact our civil liberties, and inform our daily decisions. Other panelists include Ryan Calo, faculty director of the University of Washington's Tech Policy Lab; Racquel Russell, Director of Government Relations and Public Affairs for Zillow; and Matt Wood, General Manager of Product Strategy for Amazon Web Services. From the Internet of Things to the wireless technologies in automobiles, the panelists will explain the range of this digital world and what steps can be taken to stay plugged in while still maintaining personal privacy and security. The panel will be moderated by Jenny Durkan, former United States Attorney and Quinn Emanuel's Global Chair of the Cyber Law and Privacy Group. Ira Rubinstein, CDT Board of Directors member, Research Fellow and Adjunct Professor at NYU School of Law, will give a special welcome to the program.

Watch We Robot 2015


Not able to make it to We Robot 2015? Want to watch your favorite panel again? Below are links to all of the talks that made We Robot 2015 great.

WeRobot 2015 Keynote: An Evening with Tony Dyson
Tony Dyson, noted roboticist and special effects model-maker, and the builder of R2-D2, discusses the future of robotics with Professor Ryan Calo of the University of Washington School of Law.

Friday, April 10

WeRobot 2015 Panel 1: “Who’s Johnny? (Anthropomorphizing Robots)”
Author: Kate Darling
Discussant: Ken Goldberg
Paper: http://bit.ly/1bxvbfR

As we increasingly create spaces where robotic technology interacts with humans, our tendency to project lifelike qualities onto robots raises questions around use and policy. Based on a human-robot-interaction experiment conducted in our lab, this paper explores the effects of anthropomorphic framing in the introduction of robotic technology. It discusses concerns about anthropomorphism in certain contexts, but argues that there are also cases where encouraging anthropomorphism is desirable. Because people respond to framing, framing could serve as a tool to separate these cases.

WeRobot 2015 Panel 2: “Robot Passports”
Author: Anupam Chander
Discussant: Ann Bartow
Paper: http://bit.ly/1QBX2fp

Can international trade law, which after all seeks to liberalize trade in both goods and services, help stave off attempts to erect border barriers to this new type of trade? The smart objects of the 21st century consist of both goods and information services, and thus are subject to multiple means of government protectionism, but also trade liberalization. This paper is the first effort to locate and analyze the Internet of Things and modern robotics within the international trade framework.

WeRobot 2015 Panel 3: “Robotics Governance”
Peter Asaro of the New School, Jason Millar of Queen’s University, Kristen Thomasen of the University of Ottawa, and David Post of the New America Foundation discuss the challenges facing governance and regulation of emerging robotic technologies.

Peter Asaro: “Regulating Robots: A Multi-Scale Approach to Developing Robot Policy and Technology” http://bit.ly/1H1fZE8

Jason Millar, “Sketching an Ethics Evaluation Tool for Robot Design and Governance” http://bit.ly/1Fsqro4
Kristen Thomasen, “Driving Lessons: Learning from the History of Automobile Regulation to Legislate Better Drones” http://bit.ly/1DQQrnx

WeRobot 2015 Panel 4: “Regulating Healthcare Robots”
Authors: Drew Simshaw, Nicolas Terry, Kris Hauser, M.L. Cummings
Discussant: Cindy Jacobs
Paper: http://bit.ly/1Csn4s0

There are basic, pressing issues that need to be addressed in the near future in order to ensure that robots are able to sustain innovation with the confidence of providers, patients, consumers, and investors. We will only be able to maximize the potential of robots in healthcare through responsible design, deployment, and use. This must include consideration of issues that could, if overlooked, harm patients and consumers, diminish the trust of key stakeholders in healthcare robots, and stifle long-term innovation by provoking overly restrictive, reactionary regulation. In this paper, we focus on patient and user safety, security, and privacy, and specifically the effect of medical device regulation and data protection laws on robots in healthcare.

WeRobot 2015 Panel 5: “Law and Ethics of Telepresence Robots”
Authors: J. Nathan Matias, Chelsea Barabas, Chris Bavitz, Cecillia Xie, Jack Xu
Discussant: Laurel Riek
Paper: http://bit.ly/1KoSy7n

The deployment of telepresence robots creates enormous possibilities for enhanced long-distance interaction, educational opportunities, and the bridging of social and cultural gaps. Their use raises legal and ethical issues, however. This paper outlines the development of a law and ethics toolkit for those who operate, and allow others to operate, telepresence robots, describing the potential legal and ethical issues that arise from their use and offering proposed responses and means of addressing and allocating risk.

Saturday, April 11

WeRobot 2015 Panel 6: “Unfair and Deceptive Robots”
Author: Woodrow Hartzog
Discussant: Ryan Calo
Paper: http://bit.ly/1KoSy7E

What should consumer protection rules for robots look like? The FTC’s grant of authority and existing jurisprudence make it the preferable regulatory agency for protecting consumers who buy and interact with robots. The FTC has proven to be a capable regulator of communications, organizational procedures, and design, which are the three crucial concepts for safe consumer robots. Additionally, the structure and history of the FTC shows that the agency is capable of fostering new technologies as it did with the Internet. The agency defers to industry standards, avoids dramatic regulatory lurches, and cooperates with other agencies. Consumer robotics is an expansive field with great potential. A light but steady response by the FTC will allow the consumer robotics industry to thrive while preserving consumer trust and keeping consumers safe from harm.

WeRobot 2015 Panel 7: “The Presentation of the Machine in Everyday Life”
Authors: Karen Levy & Tim Hwang
Discussant: Evan Selinger
Paper: http://bit.ly/1bMxso7

As policy concerns around intelligent and autonomous systems come to focus increasingly on transparency and usability, the time is ripe for an inquiry into the theater of autonomous systems. When do (and when should) law and policy explicitly regulate the optics of autonomous systems (for instance, requiring electric vehicle engines to “rev” audibly for safety reasons) as opposed to their actual capabilities? What are the benefits and dangers of doing so? What economic and social pressures compel a focus on system theater, and what are the ethical and policy implications of such a focus?

WeRobot 2015 Panel 8: “Operator Signatures for Teleoperated Robots”
Authors: Tamara Bonaci, Aaron Alva, Jeffrey Herron, Ryan Calo, Howard Chizeck
Discussant: Margot Kaminski
Paper: http://bit.ly/1GBw1Wz

This paper discusses legal liability and evidentiary issues that operator signatures could occasion or help to resolve. We first provide background on teleoperated robotic systems and introduce the concept of operator signatures. We then discuss cyber-security risks that may arise during teleoperated procedures, and describe the three main tasks operator signatures seek to address: identification, authentication, and real-time monitoring. Third, we discuss the legal issues that arise for each of these tasks and the legal problems operator signatures help mitigate. We then focus on liability concerns that may arise when operator signatures are used as part of a real-time monitoring and alert tool, considering the various scenarios in which actions are taken on the basis of an operator signature alert. Finally, we provide preliminary guidance on how to balance the need to mitigate cyber-security risks with the desire to enable adoption of teleoperation.

WeRobot 2015 Panel 9: “Robot Economics”
Colin Lewis, Andra Keay, Garry Mathiason, and Dan Siciliano discuss a variety of economic benefits, consequences, and externalities posed by the widespread integration of robots into 21st century society. Much of the discussion focuses on the potential impact of robots on employment and labor markets.

WeRobot 2015 Panel 10: “Personal Responsibility and Neuroprosthetics”
Authors: Patrick Moore, Timothy Brown, Jeffrey Herron, Margaret Thompson, Tamara Bonaci, Sara Goering, Howard Chizeck
Discussant: Meg Leta Jones
Paper: http://bit.ly/1J4Mtfp

This paper investigates whether giving users volitional control over therapeutic brain implants is ethically and legally permissible. We believe that it is not only permissible—it is in fact advantageous when compared to the alternative of making such systems’ operation entirely automatic. From an ethical perspective, volitional control maintains the integrity of the self by allowing the user to view the technology as restoring, preserving, or enhancing one’s abilities without the fear of losing control over one’s own humanity. This preservation of self-integrity carries into the legal realm, where giving users control of the system keeps responsibility for the consequences of its use in human hands.