Securing Augmented Reality Output

A year ago, Pokémon Go became immensely popular as players explored their surroundings for Pokémon in the smartphone-based augmented reality (AR) app. This hyper-popular game, which barely scratched the surface of AR’s potential, led to increased interest in the technology. The AR industry is expected to grow to $100 billion by 2020, and with increasing interest in AR automotive windshields and head-mounted displays (HMDs), we could soon experience immersive AR environments like the one depicted by designer and filmmaker Keiichi Matsuda in Hyper-Reality.

Hyper Reality Screenshot
But what would happen if a pop-up ad covered your game, causing you to lose? Or if, while driving, an AR object obscured a pedestrian?

These are the types of situations researchers consider in a new paper, Securing Augmented Reality Output. In the paper, Lab student Kiron Lebeck, CSE undergraduate Kimberly Ruth, Lab Affiliate Faculty Franzi Roesner, and Lab Co-Director Yoshi Kohno address how to defend against buggy or malicious AR software that may, inadvertently or intentionally, augment a user’s view of the world in undesirable or harmful ways. They ask: how can we enable the operating system of an AR platform to play a role in mitigating these kinds of risks? To address this question, the team designed Arya, an AR platform that controls output through a designated policy framework, drawing policy conditions from a range of sources including the Microsoft HoloLens development guidelines and the National Highway Traffic Safety Administration (NHTSA)’s driver distraction guidelines.

Arya Driving Scenario
By identifying specific “if-then” policy statements, this policy framework allows the Arya platform to apply a designated mechanism, or action, to virtual objects that violate a condition. In a simulated driving experience, for example, Arya renders pop-up ads and notifications that could distract the driver transparent, applying the specified action, in this case transparency, to objects that violate policies such as the following (a brief code sketch of this idea follows the list):
• Don’t obscure pedestrians,
• Only allow ads to appear on billboards, and
• Don’t distract the user while driving.
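
To make this concrete, here is a minimal sketch of how such an if-then policy check and enforcement action might look in code. It is an illustration only, written for this post: the AR object attributes (is_ad, is_notification, on_billboard, overlaps_pedestrian) and the set_transparency mechanism are hypothetical stand-ins, not Arya’s actual interfaces.

```python
# Illustrative sketch of "if-then" output policies in the spirit of Arya.
# The object attributes and the set_transparency mechanism are hypothetical
# stand-ins for this post, not Arya's real API.
from dataclasses import dataclass

@dataclass
class ArObject:
    name: str
    is_ad: bool = False
    is_notification: bool = False
    on_billboard: bool = False
    overlaps_pedestrian: bool = False

def violates_policy(obj: ArObject, user_is_driving: bool) -> bool:
    """Return True if a virtual object breaks any of the example policies."""
    if obj.overlaps_pedestrian:                  # Don't obscure pedestrians
        return True
    if obj.is_ad and not obj.on_billboard:       # Only allow ads on billboards
        return True
    if user_is_driving and obj.is_notification:  # Don't distract the user while driving
        return True
    return False

def enforce(objects, user_is_driving, set_transparency):
    """Apply the policy's action (here, full transparency) to violating objects."""
    for obj in objects:
        if violates_policy(obj, user_is_driving):
            set_transparency(obj, 1.0)           # 1.0 = fully transparent

# Example usage with a stand-in output mechanism:
scene = [
    ArObject("popup_ad", is_ad=True),                     # violates the billboard policy
    ArObject("billboard_ad", is_ad=True, on_billboard=True),
    ArObject("text_notification", is_notification=True),  # distracting while driving
]
enforce(scene, user_is_driving=True,
        set_transparency=lambda obj, alpha: print(f"hiding {obj.name}"))
```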

By implementing Arya in a prototype AR operating system, the team was able to prevent undesirable behavior in case studies across three environments, including a simulated driving scenario. Moreover, the performance overhead of policy enforcement was acceptable even in the unoptimized prototype. The team, among the first to raise AR output security issues, demonstrated the feasibility of a policy framework for addressing AR output security risks, while also surfacing lessons and directions for future efforts in the AR security space.

To read more, see Securing Augmented Reality Output.

DNA Sequencing Tools Lack Robust Protections Against Cybersecurity Risks

Aug. 10, 2017

In first, UW team infects computer using synthetic DNA molecules

DNASec Banner

Rapid improvement in DNA sequencing has sparked a proliferation of medical and genetic tests that promise to reveal everything from a person’s ancestry to their fitness levels to the microorganisms that live in their gut.

A new study from University of Washington researchers that analyzed the security hygiene of common, open-source DNA processing programs finds evidence of poor computer security practices used throughout the field.

In the study, which will be presented Aug. 17 in Vancouver, B.C., at the 26th USENIX Security Symposium, the team also demonstrated for the first time that it is possible — though still challenging — to compromise a computer system with malicious computer code stored in synthetic DNA. When that DNA is analyzed, the code can become executable malware that attacks the computer system running the software.

So far, the researchers stress, there’s no evidence of malicious attacks on DNA synthesizing, sequencing and processing services. But their analysis of software used throughout that pipeline found known security gaps that could allow unauthorized parties to gain control of computer systems — potentially giving them access to personal information or even the ability to manipulate DNA results.

“One of the big things we try to do in the computer security community is to avoid a situation where we say, ‘Oh shoot, adversaries are here and knocking on our door and we’re not prepared,’” said co-author Tadayoshi Kohno, professor at the UW’s Paul G. Allen School of Computer Science & Engineering.

“Instead, we’d rather say, ‘Hey, if you continue on your current trajectory, adversaries might show up in 10 years. So let’s start a conversation now about how to improve your security before it becomes an issue,’” said Kohno, whose previous research has provoked high-profile discussions about vulnerabilities in emerging technologies, such as internet-connected automobiles and implantable medical devices.

“We don’t want to alarm people or make patients worry about genetic testing, which can yield incredibly valuable information,” said co-author and Allen School associate professor Luis Ceze.  “We do want to give people a heads up that as these molecular and electronic worlds get closer together, there are potential interactions that we haven’t really had to contemplate before.”

In the new paper, researchers from the UW Security and Privacy Research Lab and UW Molecular Information Systems Lab offer recommendations to strengthen computer security and privacy protections in DNA synthesis, sequencing and processing.

Team UW DNA security website

The research team identified several different ways that a nefarious person could compromise a DNA sequencing and processing stream. To start, they demonstrated a technique that is scientifically fascinating — though arguably not the first thing an adversary might attempt, the researchers say.

“It remains to be seen how useful this would be, but we wondered whether under semi-realistic circumstances it would be possible to use biological molecules to infect a computer through normal DNA processing,” said co-author and Allen School doctoral student Peter Ney.

DNA is, at its heart, a system that encodes information in sequences of nucleotides. Through trial and error, the team found a way to include executable code — similar to computer worms that occasionally wreak havoc on the internet — in synthetic DNA strands.
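
The precise encoding the team used is not reproduced here. Purely as an illustration of the general idea, the sketch below assumes a toy mapping of two bits per base (A=00, C=01, G=10, T=11) to show how arbitrary bytes, including executable code, could round-trip through a strand of synthetic DNA.

```python
# Illustrative only: a toy two-bits-per-base mapping (A=00, C=01, G=10, T=11).
# This is an assumption for the sake of example, not the encoding from the study.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode arbitrary bytes as a DNA string, four bases per byte."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):               # most-significant bit pair first
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(seq: str) -> bytes:
    """Decode a DNA string produced by bytes_to_dna back into bytes."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

payload = b"\x90\x90\xcc"          # stand-in bytes, not a real exploit
strand = bytes_to_dna(payload)     # "GCAAGCAATATA"
assert dna_to_bytes(strand) == payload
```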

To create optimal conditions for an adversary, they introduced a known security vulnerability into a software program that’s used to analyze and search for patterns in the raw files that emerge from DNA sequencing.

When that particular DNA strand is processed, the malicious exploit can gain control of the computer that’s running the program — potentially allowing the adversary to look at personal information, alter test results or even peer into a company’s intellectual property.

“To be clear, there are lots of challenges involved,” said co-author Lee Organick, a research scientist in the Molecular Information Systems Lab.  “Even if someone wanted to do this maliciously, it might not work. But we found it is possible.”

In what might prove to be a more target-rich area for an adversary to exploit, the research team also discovered known security gaps in many open-source software programs used to analyze DNA sequencing data.

Some were written in unsafe languages known to be vulnerable to attacks, in part because they were first crafted by small research groups who likely weren’t expecting much, if any, adversarial pressure. But as the cost of DNA sequencing has plummeted over the last decade, open-source programs have been adopted more widely in medical- and consumer-focused applications.

Researchers at the UW Molecular Information Systems Lab are working to create next-generation archival storage systems by encoding digital data in strands of synthetic DNA. Although their system relies on DNA sequencing, it does not suffer from the security vulnerabilities identified in the present research, in part because the MISL team has anticipated those issues and because their system doesn’t rely on typical bioinformatics tools.

Recommendations to address vulnerabilities elsewhere in the DNA sequencing industry include: following best practices for secure software, incorporating adversarial thinking when setting up processes, monitoring who has control of the physical DNA samples, verifying sources of DNA samples before they are processed and developing ways to detect malicious executable code in DNA.
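
On that last recommendation, one very simple screening idea (an assumption for illustration, not an approach from the paper) is to decode sequenced reads back into bytes and flag any read containing byte patterns commonly associated with executable content. The sketch below reuses the toy two-bits-per-base mapping from the earlier example; a real screening tool would need far more robust heuristics.

```python
# Hedged sketch: flag reads whose decoded bytes contain signatures commonly
# associated with executable content. Reuses the toy two-bits-per-base mapping;
# not a production-grade detector.
BITS_FOR_BASE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

# Example byte patterns: ELF magic, PE/DOS "MZ" header, and an x86 NOP sled.
SUSPICIOUS_SIGNATURES = [b"\x7fELF", b"MZ", b"\x90" * 8]

def read_to_bytes(read: str) -> bytes:
    """Decode a DNA read (ACGT only) into bytes, four bases per byte."""
    out = bytearray()
    for i in range(0, len(read) - len(read) % 4, 4):
        byte = 0
        for base in read[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

def looks_executable(read: str) -> bool:
    """Return True if the decoded read contains any suspicious signature."""
    decoded = read_to_bytes(read)
    return any(sig in decoded for sig in SUSPICIOUS_SIGNATURES)

reads = ["ACGTACGTACGTACGT", "GCAA" * 10]   # the second decodes to a run of 0x90 bytes
print([r for r in reads if looks_executable(r)])
```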

“There is some really low-hanging fruit out there that people could address just by running standard software analysis tools that will point out security problems and recommend fixes,” said co-author Karl Koscher, a research scientist in the UW Security and Privacy Lab. “There are certain functions that are known to be risky to use, and there are ways to rewrite your programs to avoid using them. That would be a good initial step.”

The research was funded by the University of Washington Tech Policy Lab, the Short-Dooley Professorship and the Torode Family Professorship.

###

For more information, contact the research team at dnasec@cs.washington.edu.

Images available for download here: www.bitly.com/uwdnasec

Study available for download here: dnasec.cs.washington.edu

Diverse Voices: A How-To Guide for Creating More Inclusive Tech Policy Documents

Diverse Voices Blog Banner
Lassana Magassa | Meg Young | Batya Friedman

Developing Inclusive Tech Policy
All too often, policy development for emerging technology neglects under-represented populations. In response to this challenge, the UW Tech Policy Lab developed the Diverse Voices method in 2015. The method uses short, targeted conversations about emerging technology with “experiential experts” from under-represented groups to provide feedback on draft tech policy documents. This process works to increase the likelihood that the language in the finalized tech policy document addresses the perspectives and circumstances of broader groups of people – ideally averting injustice and exclusion. The Lab seeks to make this process available to any group wanting to improve a draft technology policy document in the following Guide: “Diverse Voices: A How-To Guide for Facilitating Inclusiveness in Tech Policy.”

The How-To Guide is now available for download here.

What is the Diverse Voices Method?
The Diverse Voices method is not a process for drafting a white paper. Rather, once a draft of a tech policy document exists, the method can be employed to integrate input from experiential experts before a final version of the document reaches policymakers.

Main steps in the method:
• Select a tech policy document
• Surface relevant under-represented groups
• Assemble a panel of experiential experts who represent those groups to examine and respond to the tech policy document
• Synthesize panel feedback
• Provide panel feedback to tech policy document authors

Practical by design, the Diverse Voices method seeks to improve the inclusivity of tech policy documents in a manner that is low cost—both to tech policy document authors and to the experiential experts who provide critical feedback on those documents. To be clear: the Diverse Voices method improves inclusivity, but it does not claim to capture diverse perspectives fully or comprehensively. Rather, the method helps to identify some critical aspects of the tech policy document that could be improved and to provide suggestions for those improvements. In brief, the method offers progress—better tech policy documents—not perfection.

Acknowledgements
We thank the expert panelists who helped us test and refine the Diverse Voices method. We thank those who provided helpful input on drafts of this guide: Hannah Almeter, Stephanie Ballard, Ryan Calo, Sandy Kaplan, Nick Logler, Emily McReynolds, and Daisy Yoo. We also thank The William and Flora Hewlett Foundation and Microsoft for their ongoing support, including funding the creation of this guide.

We welcome your questions and comments on the How-To Guide and its underlying process. Please email us at: diversevoices@techpolicylab.org.

Driverless Seattle: How Cities Can Prepare for Automated Vehicles

“Driverless Seattle: How Cities Can Plan for Automated Vehicles” is a new report from the Tech Policy Lab at the University of Washington, produced in partnership with Challenge Seattle, a private sector initiative led by regional CEOs, and the Mobility Innovation Center at the University of Washington.

The advent of automated vehicles (AVs)—also known as driverless or self-driving cars—alters many assumptions about automotive travel. Foremost, of course, is the assumption that a vehicle requires a driver: a human occupant who controls the direction and speed of the vehicle, who is responsible for attentively monitoring the vehicle’s environment, and who is liable for most accidents involving the vehicle. By changing these and other fundamentals of transportation, AV technologies present opportunities but also challenges for policymakers across a wide range of legal and policy areas. To address these challenges, federal and state governments are already developing regulations and guidelines for AVs.

Seattle and other municipalities should also prepare for the introduction and adoption of these new technologies. To facilitate preparation for AVs at the municipal level, this whitepaper—the result of research conducted at the University of Washington’s interdisciplinary Tech Policy Lab—identifies the major legal and policy issues that Seattle and similar cities will need to consider in light of new AV technologies. Our key findings and recommendations include:

1. There is no single “self-driving car.” Instead, AVs vary in the extent to which they complement or replace human driving: AVs may automate particular driving functions (e.g., parallel parking), may navigate autonomously only in certain driving scenarios (e.g., on the freeway), or may allow the driver to switch in and out of autonomous mode at will. In some instances, a lead driver may control a platoon of connected vehicles without drivers. We recommend that policymakers recognize the variability in AV technology and employ terms—such as the Society of Automotive Engineers’ six-level AV taxonomy, discussed below—that accurately capture the benefits and constraints of particular AV models.

2. The AV regulatory environment is still developing. AVs are currently legal in Washington state, but AVs could be subject to a variety of new federal and state guidelines and regulations, and municipalities will need to be aware of these developments and the potential preemption of local action. However, municipalities possess their own, varied means by which to channel AVs, including government services powers, proprietary services powers, corporate powers, and police powers.

3. AVs raise legal and policy issues across several domains, including challenges to transportation planning, infrastructure development, municipal budgeting, insurance, and police and emergency services. Some of these challenges result from the extent to which existing laws and policies assume a particular configuration of automotive technology. Regulations that presume a human driver capable of managing the vehicle, for example, may limit the potential benefits of AVs for populations with special mobility constraints (e.g., those with disabilities). Other challenges will likely arise from new policies and procedures developed in response to AVs. For example, methods of revenue generation developed in response to AVs may inequitably shift revenue burdens onto drivers unable to afford an AV.

4. The adoption of AVs is likely to be a gradual and geographically uneven process. While some benefits of AVs are likely to be realized as soon as the vehicles reach the road (e.g., improvements to traffic safety), other potential benefits (e.g., reduced traffic congestion) may not be realized until AVs are dominant on a region’s roadways. Consequently, the transition from traditional vehicles to AVs will likely generate significant, staged policy challenges over time. We recommend that policymakers focus on planning for scenarios that involve both AVs and human-driven vehicles on roadways through at least 2050.

5. AV technologies and policies are likely to have significant impacts on stakeholder groups traditionally underrepresented in the policymaking process (e.g., socioeconomically disadvantaged communities), and will consequently raise challenges for social equity. We recommend that policymakers engage in diverse stakeholder analysis to assess not only the impacts of AVs, but also the impacts of proposed policy responses to AVs.

 

Driverless Seattle: How Cities Can Plan for Automated Vehicles

New Report from the University of Washington’s Tech Policy Lab and the Mobility Innovation Center Touts Need for Readiness, Tackles Costs and Benefits of Automated Vehicles

SEATTLE, Wash., Feb. 28, 2017—Automated vehicles (AVs) are coming to Seattle, and now is the time for government officials to prepare for them. So say the authors of “Driverless Seattle: How Cities Can Plan for Automated Vehicles,” a new report from the Tech Policy Lab at the University of Washington, put together in partnership with Challenge Seattle, a private sector initiative led by regional CEOs, and the Mobility Innovation Center at the University of Washington.

At their best, AVs promote traffic efficiency – especially important in Seattle, whose evening rush hour congestion is among the worst in the nation. They reduce the number of vehicle crashes caused by human error, and mitigate human inefficiencies in the flow of traffic. They encourage ride-sharing rather than individual vehicle ownership. Moreover, they are already here: the Tesla Model S Autopilot system is available for purchase; the ride-share company Uber is testing AVs in Pittsburgh; and Google’s AV fleet has already driven nearly two million miles autonomously.

But how will Seattle integrate AVs more broadly into its complex transportation and legal landscapes? The key, argues Ryan Calo, a law professor and one of the report’s authors, is for the city to identify an AV strategy that will guide policymakers’ decision-making and to initiate coalition building with regional research institutions, public agencies, NGOs, and businesses. Seattle faces difficult questions. Will the city enthusiastically promote itself as an AV innovation hub? Will it take a more hands-off approach? Or will it put strict limits on AV use until the technology has proved itself in other municipalities? And what does each option mean from a policy standpoint?

Deciding a course of action will enable local and regional officials to make consistent policy choices, and communicate those choices effectively. “Taking these steps now,” the authors argue, “will better position Seattle to continue to thrive in an eventual world of far greater automation in transportation.”

“Autonomous vehicles are going to fundamentally change transportation in Seattle, and we need to be ready for it,” said Christine Gregoire, CEO of Challenge Seattle. “This report offers a measured, research-based approach that will help Seattle prepare for a driverless future.”

“Autonomous vehicles are coming to cities, and in Seattle we’re planning today for how they will operate alongside all the other ways we get around,” said Scott Kubly, director of the Seattle Department of Transportation. “This report captures the big picture and provides a solid foundation for next steps on AV policy making and implementation.”

“Driverless Seattle” is the first product to come from the Mobility Innovation Center (MIC), which launched in March of 2016. A multi-disciplinary project housed at CoMotion at the University of Washington, the MIC brings together the Puget Sound region’s leading business, government, and academic sectors to use technology and innovation to find transportation solutions.

About the Tech Policy Lab

The Tech Policy Lab is a unique, interdisciplinary collaboration at the University of Washington that formally bridges three units: Computer Science and Engineering, the Information School, and the School of Law. Its mission is to help policymakers, broadly defined, make wise and inclusive technology policy.

About the Mobility Innovation Center

The University of Washington and Challenge Seattle are committed to advancing our region’s economy and quality of life by helping to build the transportation system of the future. Together, they have partnered to create a multi-disciplinary Mobility Innovation Center.  Housed at CoMotion at the University of Washington, the Center brings together the region’s leading expertise from the business, government, and academic sectors to tackle specific transportation challenges, using applied research and experimentation. Cross-sector teams will attack regional mobility problems, develop new technologies, apply system-level thinking, and bring new innovations to our regional transportation system.

CoMotion is the collaborative innovation hub dedicated to expanding the societal impact of the UW community.

About Challenge Seattle

Challenge Seattle is a private sector initiative led by many of the region’s CEOs working to address the issues that will determine the future of our region – for our economy and our families. Building on our region’s history, they are focused on taking on the challenges that must be addressed to ensure our region continues to grow, transform, and thrive, while maintaining our quality of life.

Contact:

Melissa Englund
Marketing & Communications
UW School of Law
p: 206.685.7394
e: menglund@uw.edu

Donna O’Neill
Marketing & Communications
CoMotion at University of Washington
p: 206.685.9972
e: donnao3@uw.edu