More on Robotics

To Make a Robot Secure: An Experimental Analysis of Cyber Security Threats Against Teleoperated Surgical Robots

Teleoperated robots are playing an increasingly important role in military actions and medical services. In the future, remotely operated surgical robots will likely be used in more scenarios such as battlefields and emergency response. But the rapidly growing applications of teleoperated surgery raise the question: what if the computer systems for these robots are attacked, taken over, and even turned into weapons? Our work seeks to answer this question by systematically analyzing possible cyber security attacks against Raven II, an advanced teleoperated robotic surgery system. We identify a slew of possible cyber security threats and experimentally evaluate their scope and impact. We demonstrate the ability to maliciously control a wide range of the robot’s functions, and even to completely ignore or override command inputs from the surgeon. We further find that it is possible to abuse the robot’s existing emergency stop (E-stop) mechanism to execute efficient (single-packet) attacks. We then consider steps to mitigate these identified attacks and experimentally evaluate the feasibility of applying existing security solutions against these threats. The broader goal of our paper, however, is to raise awareness and increase understanding of these emerging threats. We anticipate that the majority of attacks against telerobotic surgery will also be relevant to other teleoperated robotic and co-robotic systems.

Guest Post: UW Law School’s Technology Law and Policy Clinic: Autonomous-Vehicle Regulation – What Can and Should the States Regulate?

by Ashleigh Rhodes, Brooks Lindsay, Don Wang

As autonomous vehicles drive from fantasy to reality (and they’ve almost arrived), rules and regulations are needed to ensure this new technology is safely integrated into the country’s current transportation infrastructure. States have been wondering what they can regulate and what the federal government will preemptively regulate. In other words, what types of state autonomous-vehicle provisions will the federal government preempt (or invalidate) based on the Supremacy Clause, which holds that federal laws trump state laws where the two conflict? And, vice versa, what state provisions will survive after the federal government passes permanent AV regulations? The Uniform Law Commission (ULC) asked the University of Washington School of Law’s Technology Law and Policy Clinic to attempt to answer these questions. Below is a summary of our findings from our November 2014 report to the ULC, The Risks of Federal Preemption of State Autonomous Vehicle Regulations. We are also working on detailed provision recommendations to the ULC and draft legislation for Washington, all of which rely heavily on the preemption conclusions below (stay tuned for these products and a blog post to follow).

Summary of Findings

The National Highway Traffic Safety Administration’s (NHTSA) statutory mandate to establish vehicle safety standards will likely preempt any safety regulations states adopt for autonomous vehicles, but states can expect to have authority to verify the continued safe operation of used vehicles with after-market autonomous modifications. Furthermore, in its 2013 Preliminary Statement of Policy Concerning Automated Vehicles (Preliminary Statement), the NHTSA encouraged states to legislate and regulate in the areas of licensing, permitting, testing, and test-driver training, as well as to determine conditions for the operation of specific types of autonomous vehicles. The NHTSA could preempt state tort law if it conflicts with a significant regulatory objective, but it has shown little will to do so. Lastly, it is important to note that federal authority to preempt certain state provisions does not necessarily diminish those provisions’ worth; they may provide critical interim value to states. On the other hand, such interim provisions may require substantial work and political will for fleeting gains. The preemption question, therefore, is only the starting point for a larger judgment call by state legislators.


The NHTSA was established in 1966 when Congress enacted the National Traffic and Motor Vehicle Safety Act (Safety Act), which sets out that the purpose and policy of the NHTSA is to “reduce traffic accidents and deaths and injuries resulting from traffic accidents.” To achieve this purpose, the NHTSA has the authority “to prescribe motor vehicle safety standards for motor vehicles and motor vehicle equipment in interstate commerce; and to carry out needed safety research and development.” In addition, the preemption provision of the Safety Act expressly states that when a federal standard is in effect, a state may only regulate the same aspect if its standard is identical to the federal standard.

A large portion of the Preliminary Statement is devoted to the NHTSA’s “Research Plan for Automated Vehicles.” Due to broad statutory definitions, all motor vehicle equipment is covered regardless of the type of technology used. Therefore, when the NHTSA completes its research — expected to take at least three to four years — it is likely to issue safety standards for vehicles originally manufactured as autonomous, along with the individual equipment pieces that give a vehicle its autonomous capabilities.

Although the NHTSA has authority to establish guidelines applicable to after-market equipment, that authority diminishes after a vehicle’s first sale. The agency will therefore likely continue to work with states to conduct inspections ensuring the functionality of basic safety equipment and after-market autonomous modifications in used vehicles. Since it will take the NHTSA some time to complete its research into safety standards applicable to autonomous technology, its Preliminary Statement suggests basic interim principles for state laws. These include facilitating the “safe, simple, and timely” transition from self-driving mode to driver control and establishing data recording requirements to ensure safe operation of autonomous vehicles.

The Preliminary Statement specifically “recommended [eight broad] principles that States may wish to apply as part of their considerations for driverless vehicle operation, especially with respect to testing and licensing.” States can expect to fully control the permitting for test cars and drivers and the requirements for test-driver training programs. However, the NHTSA advised against states allowing autonomous vehicle operation for purposes other than testing. When autonomous vehicles have reached the level of sophistication needed for general driving purposes, states can expect to exert considerable control over long-term licensing or endorsement for consumer drivers, similar to the current scope of state highway-safety programs.

Notwithstanding the preemption provision, the Safety Act contains a common law liability clause, which stipulates that compliance with a federal standard created in accordance with the Safety Act “does not exempt a person from liability at common law [based on precedent established by previous state court rulings].” Supreme Court precedent has established that this clause forecloses only express preemption; it does not prohibit conflict preemption. In Geier v. Am. Honda Motor Co., the Court concluded that a state tort claim was preempted because it conflicted with the regulatory intention to provide manufacturers with options. But eleven years later, in Williamson v. Mazda Motor of Am., Inc., the Court determined that state tort claims are preempted only if giving the manufacturer a choice was a “significant regulatory objective.” The Williamson decision gave lower courts guidelines for deciding whether the “significant regulatory objective” standard is met, which include reviewing the history of the regulation, the agency’s view of the regulation’s objective at the time it was promulgated, and the agency’s view, at the time of litigation, of the regulation’s preemptive effect. The agency’s views of the regulation can also be influenced by the Administration’s preemption philosophy.

Until the NHTSA completes its research, states can rely on the Preliminary Statement to enact testing legislation similar to that enacted in California, Florida, Michigan, Nevada, and the District of Columbia. Furthermore, states that want to legislate safety standards for salable autonomous vehicles can follow Nevada’s model, which included the enactment of interim safety provisions that it assumes will be preempted by the NHTSA’s final rules. These rules are expected to take at least three to four years to complete, so states should plan accordingly.

You can see our full report to the ULC here. We are also finalizing a report with detailed provision recommendations to the ULC as well as draft legislation for Washington state, so stay tuned for those products and an accompanying blog post.

Tech Policy Lab Distinguished Lecture: Responsible Innovation in the Age of Robots & Smart Machines

Many of the things we do to each other in the 21st century – both good and bad – we do by means of smart technology. Drones, robots, cars, and computers are a case in point. Military drones can help protect vulnerable, displaced civilians; at the same time, drones that do so without clear accountability give rise to serious moral questions when unintended deaths and harms occur. More generally, the social benefits of our smart machines are manifold; the potential drawbacks and moral quandaries are extremely challenging. In this talk, I take up the question of responsible innovation, drawing on the European Union experience and reconsidering the relations between ethics and design. I shall introduce ‘Value Sensitive Design’, one of the most promising approaches, and provide illustrations from robotics, AI, and drone technology to show how moral values can be used as requirements in technical design. By doing so we may overcome problems of moral overload and conflicting values by design.

Jeroen van den Hoven is a professor of Ethics and Technology at Delft University of Technology. He was the first scientific director of 3TU.Ethics (2007–2013) and is currently editor-in-chief of Ethics and Information Technology. In 2009 he won both the World Technology Award for Ethics and the IFIP prize for ICT and Society for his work on ethics and ICT.

Announcing the We Robot 2015 Call for Papers

The 2015 We Robot Call for Papers is now available. We Robot 2015, the fourth annual robotics law and policy conference, will be held in Seattle, Washington on April 10–11, 2015 at the University of Washington School of Law. We Robot has previously been hosted twice at the University of Miami School of Law and once at Stanford Law School. The conference web site is at

We Robot 2015 seeks contributions by academics, practitioners, and others in the form of scholarly papers or demonstrations of technology or other projects. We Robot fosters conversations between the people designing, building, and deploying robots, and the people who design or influence the legal and social structures in which robots will operate. We particularly encourage contributions resulting from interdisciplinary collaborations, such as those between legal, ethical, or policy scholars and roboticists.

This conference will build on existing scholarship that explores how the increasing sophistication and autonomous decision-making capabilities of robots, and their widespread deployment everywhere from the home, to hospitals, to public spaces, to the battlefield, disrupt existing legal regimes or require rethinking of various policy issues. We are particularly interested this year in “solutions,” i.e., projects with a normative or practical thesis aimed at helping to resolve issues around contemporary and anticipated robotic applications.

Scholarly Papers
Topics of interest for the scholarly paper portion of the conference include but are not limited to:

  • The impact of artificial intelligence on civil liberties, including sexuality, equal protection, privacy, suffrage, and procreation.
  • Comparative perspectives on the regulation of robotic technologies.
  • Assessment of what institutional configurations, if any, would best serve to integrate robotics into society responsibly.
  • Deployment of autonomous weapons in the military or law enforcement contexts.
  • Law and economic perspectives on robotics.

These are only some examples of relevant topics. We are very interested in papers on other topics driven by actual or probable robot deployments. The purpose of this conference is to help set a research agenda relating to the deployment of robots in society, to inform policy-makers of the issues, and to help design legal rules that will maximize opportunities and minimize risks arising from the increased deployment of robots in society.

We also invite expressions of interest from potential discussants. Every paper accepted will be assigned a discussant whose job it will be to present and comment on the paper. These presentations will be very brief (no more than 10 minutes) and will consist mostly of making a few points critiquing the author’s paper to kick off the conversation. Authors will then respond briefly (no more than 5 minutes). The rest of the session will consist of a group discussion about the paper moderated by the discussant.

Unlike the scholarly papers, proposals for demonstrations may be purely descriptive, and designer/builders will be asked to present their work themselves. We’d like to hear about your latest innovations, what’s on the drawing board for the next generations of robots, and any legal and policy issues you have encountered in the design or deployment process.

How to Submit Your Proposal
Please send a 1–3 page abstract outlining your proposed paper and a c.v. of the author(s) to

  • Paper proposals accepted starting Oct. 1, 2014. See for further information.
  • Call for papers closes Nov 3, 2014.
  • Responses by Dec. 14, 2014.
  • Full papers due by March 23, 2015. They will be posted online at the conference web site unless otherwise agreed by participants.

We anticipate paying reasonable round-trip domestic coach airfare and providing hotel accommodation for presenters and discussants.