Blog

Tech Policy Lab Faculty Directors discuss “Responsible innovation: A cross-disciplinary lens on privacy and security challenges”

The Tech Policy Lab’s Faculty Co-Directors were the featured speakers for November’s installment of the 2015 Engineering Lecture Series, where they discussed what it means to innovate responsibly, particularly with respect to privacy and security.

How do you interact and socialize? How do you conduct business? How do you raise your kids and care for the elderly? All these basic activities are being directly impacted by new technologies that are emerging at an incredible rate. While each new technology brings its own benefits and risks, regulations struggle to catch up.

Augmented Reality – Technology & Policy Primer

This whitepaper identifies some of the major legal and policy issues that augmented reality (AR), as a novel technology, may present, and it outlines conditional recommendations to help address those issues. Our key findings include:

1. AR exists in a variety of configurations, but in general, AR is a mobile or embedded technology that senses, processes, and outputs data in real-time, recognizes and tracks real-world objects, and provides contextual information by supplementing or replacing human senses.

2. AR systems will raise legal and policy issues in roughly two categories: collection and display. Issues tend to include privacy, free speech, and intellectual property as well as novel forms of distraction and discrimination.

3. We recommend that policymakers—broadly defined—engage in diverse stakeholder analysis, threat modeling, and risk assessment processes. We recommend that they pay particular attention to: a) the fact that adversaries succeed when systems fail to anticipate behaviors; and that, b) not all stakeholders experience AR the same way.

4. Architectural/design decisions—such as whether AR systems are open or closed, whether data is ephemeral or stored, where data is processed, and so on—will each have policy consequences that vary by stakeholder.

How Information Asymmetry Helped Find Abducted Kids

Lab Co-Director Ryan Calo is featured in a Washington Post article describing how police used Spotify and other streaming services to locate abducted kids in Mexico. Calo explains:

“This is a classic case of ‘information asymmetry,’ said University of Washington law professor Ryan Calo, meaning when companies, government agencies or police departments have more information about your online habits than you even realize is out there.

‘There’s an enormous underestimation of your digital footprint,’ Calo said. ‘You might not realize how much your data is being stored, but you also might not realize how many parties have access to it. Think about all the uses to which this information can be put.'”

Understanding Journalists’ Information Security Choices

This blog post, cross-posted from the Tow Center, describes recent work studying computer security in journalist-source communications, a collaboration between Susan McGregor at the Columbia Journalism School, UW HCI+D master’s students Polina Charters and Tobin Holliday, and TPL-affiliated faculty member Franziska Roesner.

Understanding Journalists’ Information Security Choices

by Susan McGregor

In the roughly two years since the Snowden revelations, information security and source protection have become an ongoing focus of conferences, surveys, and how-to guides geared toward the journalism community. Yet despite chilling effects, targeted hacking, and the high-profile prosecution of sources, a Pew Research Center survey of investigative journalists (conducted in association with the Tow Center), released just a few months ago, found that relatively few of them had changed their practices in light of these events.

On its surface, this seems counterintuitive. If journalists know that their communications and data may be under surveillance or the target of attack, why haven’t they adapted their practices to mitigate these risks? Surely both protecting and reassuring sources are crucial to building the kind of relationships on which essential journalism is based. Yet apart from select news organizations, strong information security is still seen as optional by many working journalists.

Eight months ago, my collaborators and I set out to explore why this might be, by learning more about how journalists collect, store and transmit information on a day-to-day basis. The full results of this study – based on in-depth interviews with institutional journalists at a range of news organizations on two continents – will be presented at USENIX Security in August, but the paper is already available for download here.

Many of our findings will not surprise industry professionals, yet they point to shared challenges faced by organizations and journalists across coverage areas and countries, suggesting opportunities for collaboration and further development:

The infrastructure and overhead of many security-enhancing tools are incompatible with journalists’ and sources’ available technologies. Sources’ preferences tend to drive journalists’ use of a particular communication channel, and the most vulnerable sources may have limited or non-exclusive access to the accounts and devices required to use existing information security tools.  For example, some participants reported working with sources that owned only a feature phone or did not personally own a computer.

Journalists’ information security priorities are influenced by the resources and culture of their organization. Several of our study participants felt that they did not have anyone within their organization to ask about information security issues; of those who did, many referenced a colleague covering information security rather than a technical expert. Many study participants also lacked both the software to secure their communications and data (such as PGP) and the privileges to install such software on their work computers.

The risks, benefits, and best applications of existing tools are poorly understood. Only one journalist in our study expressed concerns about the use of third-party communication and data storage tools, despite weak legal protections for the extensive data and metadata stored with them. Likewise, participants expressed skepticism about using anonymity-supporting platforms like SecureDrop, even though such platforms can be used to conduct ongoing conversations between journalists and sources to verify submitted data.

Journalists have unaddressed information management needs. Many participants reported using third party and/or cloud based tools – often connected to personal accounts – to collect, organize and search story-related research, notes and other data. While these systems introduce vulnerabilities, they indicate an opportunity to create secure, journalism-oriented software solutions for note and data storage, organization, and retrieval.
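
The study stops at identifying this opportunity, but as a rough illustration of what a secure, journalism-oriented note store might involve, here is a minimal sketch of client-side encryption in Python using the cryptography library’s Fernet primitive: a note is encrypted on the journalist’s own machine before it ever reaches a third-party sync folder, so the provider only handles ciphertext. The file names and key handling below are hypothetical simplifications, not features of any tool examined in the study.

    # Minimal sketch (not from the study): encrypt notes locally before syncing
    # them to any third-party storage, so the provider only ever sees ciphertext.
    # Assumes the `cryptography` package; key handling is deliberately simplified.
    from pathlib import Path
    from cryptography.fernet import Fernet

    KEY_FILE = Path("notes.key")  # hypothetical location; a real tool would use an OS keychain

    def load_or_create_key() -> bytes:
        # Load the symmetric key from disk, generating one on first run.
        if KEY_FILE.exists():
            return KEY_FILE.read_bytes()
        key = Fernet.generate_key()
        KEY_FILE.write_bytes(key)
        return key

    def encrypt_note(plaintext: str, key: bytes) -> bytes:
        # The returned ciphertext is what would be placed in a cloud-synced folder.
        return Fernet(key).encrypt(plaintext.encode("utf-8"))

    def decrypt_note(ciphertext: bytes, key: bytes) -> str:
        # Decryption happens only on the journalist's own machine.
        return Fernet(key).decrypt(ciphertext).decode("utf-8")

    if __name__ == "__main__":
        key = load_or_create_key()
        token = encrypt_note("Interview notes: source confirmed the memo's date.", key)
        Path("note.enc").write_bytes(token)  # this file, not the plaintext, gets synced
        print(decrypt_note(Path("note.enc").read_bytes(), key))

In practice the key would live in an OS keychain or password manager rather than in a file beside the notes; the point is simply that the confidentiality of notes need not depend on the cloud provider.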

Journalists tend to think of information security as an individual rather than a collective problem. Many of our participants said that they did not believe their work was likely to be the subject of either legal or technical targeting. Yet many participants also reported some sharing of resources with editors, proofreaders or collaborators, meaning an attack on a colleague could affect their work or vice versa.

While the results of this work suggest that there is still much to improve in journalists’ information security practices, they also highlight some distinct paths for future research, tool development, and educational interventions, some of which are already in development. In addition, we are currently conducting research on the challenges to information security that journalistic outlets experience at an organizational level, and we are actively seeking collaborators. If you are interested in learning how your organization can help with this work, please contact Susan McGregor.

Lab Members’ Research on Teleoperated Robots Featured by MIT Tech Review

Lab members Tamara Bonaci and Howard Chizeck’s work on the security of teleoperated robots has recently been featured in a number of science news reports, including MIT Technology Review, Popular Science, and Ars Technica.

“Tamara Bonaci and pals at the University of Washington in Seattle examine the special pitfalls associated with the communications technology involved in telesurgery. In particular, they show how a malicious attacker can disrupt the behavior of a telerobot during surgery and even take over such a robot, the first time a medical robot has been hacked in this way.”