
Privacy in Online Dating

How do you manage your privacy in online dating? Chances are that if you use online dating, or have considered using it, this is an issue you’ve given some thought to. And you wouldn’t be alone, as privacy issues in online dating have repeatedly appeared in the media. Two summers ago, during the Rio Olympics, privacy in online dating made headlines when a Tinder user posted screenshots of Olympians’ profiles on social media, and a journalist collected identifying information about closeted gay Olympians through Grindr. In September, a journalist requested her personal data from Tinder and received 800+ pages, including information about her Facebook and Instagram activity. And more recently, researchers have revealed security vulnerabilities in a number of online dating apps, including ways that users may be put at risk by sensitive information they disclose on these services.

These events and others show that individual users can’t control all privacy-related risks when using online dating. To understand how users reason about the privacy risks they can potentially control through their own decisions, Lab Ph.D. student Camille Cobb and Lab Faculty Co-Director Yoshi Kohno studied online dating users’ perceptions about, and actions governing, their privacy in “How Public Is My Private Life? Privacy in Online Dating.” The researchers surveyed 100 participants about how they handle their own and other users’ privacy; then, based on themes raised in survey responses, they conducted follow-up interviews and analyzed a sample of Tinder profiles.


Based on their survey of 100 online dating users and interviews with 14 of those users, the researchers found that when choosing profile content, looking people up, and taking screenshots of messages or profiles, users may face complex tradeoffs between preserving their own or others’ privacy and pursuing other goals. Users described weighing privacy considerations, including the risk of feeling awkward, of screenshots and data breaches, of stalking, and of their profile being seen by a friend or co-worker, against goals like getting successful matches, preserving messages that may become sentimental if a match succeeds, staying safe, and avoiding scams.

These tradeoffs are complex, and they involve users’ privacy decisions beyond just the dating app. For example, a user concerned about privacy on social media like Facebook might change their name there to something unusual and hard to guess, but if a dating service pulls that name into the user’s profile, the unique name makes them easier to find outside of the dating app. Beyond the name they used, users also faced tradeoffs around how much information to include in their profile. Including more information could make users more easily searchable outside of the dating app, while including too little identifiable information carried its own risks; one user, for example, risked being taken for a bot.

Focusing on users’ concerns about “searchability,” or the risk of being identified elsewhere online, the researchers analyzed 400 Tinder profiles. Using techniques readily available to any Tinder user, even one without technical expertise, the researchers were able to identify 47% of the users elsewhere online. Having an account directly linked to another account, or mentioning a username for another account in the profile, increased the chance of being found to 80%. These results support concerns raised in the survey; for example, the researchers were able to find a larger portion of people with unique names, echoing one survey respondent’s concern that having a unique name would make her more identifiable.

Discussing the privacy considerations and tradeoffs that users described, and in light of their analysis of profiles’ searchability, the researchers suggest a number of avenues to explore that could help online dating users make decisions around privacy. These include restricting the number of screenshots a user can take per day, allowing users to remove matches, and, more broadly, implementing privacy awareness campaigns for users.

This paper was presented at the 26th International World Wide Web Conference and is available here.

Tech Policy Lab Joins Partnership on Artificial Intelligence

The Tech Policy Lab is delighted to be joining the Partnership on AI to Benefit People and Society, a non-profit organization charged with exploring and developing best practices for AI. The Lab, which aims to position policymakers, broadly defined, to make wiser and more inclusive tech policy, joins a diverse range of voices from academia, industry and non-profit organizations committed to collaboration and open dialogue on the opportunities and rising challenges around AI.


Since its inception, the Lab has worked to advance AI in the public interest through conferences, workshops, and research, among other initiatives. In 2015, we organized the fourth annual robotics law and policy conference, WeRobot. And in 2016, we co-organized the Obama White House’s inaugural public workshop on AI, focusing on the legal and governance implications of AI. Our research focuses on the policy implications of AI and includes studying AI-connected devices in the home.

We are planning many more research initiatives around AI, including AI-assisted decision-making, AI and cybersecurity, and AI and diversity. We will bring to our AI research our commitment to the inclusion of diverse perspectives in tech policy research and outcomes, including our Diverse Voices method (made available earlier this year through our How-To Guide), which engages diverse panels of “experiential” experts in short, targeted conversations around a technology to improve inclusivity in tech policy outcomes.

The Partnership on AI will be a great network and resource as we undertake this work. We look forward to collaborating with a diverse group of stakeholders from industry, academia, and policy around the Partnership on AI’s goals: to develop and share best practices, advance public understanding of AI, create a diverse network of experts around AI, and examine AI’s impact on people and society.

About the Tech Policy Lab

The Tech Policy Lab is a unique, interdisciplinary research unit at the University of Washington. The Lab’s mission is to position policymakers, broadly defined, to make wiser and more inclusive tech policy. Situated within a globally renowned research university, the Tech Policy Lab is committed to advancing artificial intelligence in the public interest through research, analysis, and education and outreach. To learn more about the Lab’s cutting-edge research, thought leadership, and education initiatives, go to www.techpolicylab.uw.edu.

About the Partnership on AI

The Partnership on AI to Benefit People and Society (Partnership on AI) is a not-for-profit organization founded by Amazon, Apple, Google/DeepMind, Facebook, IBM, and Microsoft. Our goals are to study and formulate best practices on the development, testing, and fielding of AI technologies, to advance the public’s understanding of AI, to serve as an open platform for discussion and engagement about AI and its influences on people and society, and to identify and foster aspirational efforts in AI for socially beneficial purposes. We actively designed the Partnership on AI to bring together a diverse range of voices from for-profit and non-profit organizations, all of whom share our belief in its tenets and are committed to collaboration and open dialogue on the many opportunities and rising challenges around AI. For the full list of founding members and partners, go to https://www.partnershiponai.org/partners/.

Exploring ADINT: Using Ad Targeting for Surveillance on a Budget

New research by former CSE Ph.D. student Paul Vines, Lab Faculty Associate Franzi Roesner, and Faculty Co-Director Yoshi Kohno demonstrates how targeted advertising can be used for personal surveillance.

From “Exploring ADINT: Using Ad Targeting for Surveillance on a Budget – or – How Alice Can Buy Ads to Track Bob”:

The online advertising ecosystem is built upon the ability of advertising networks to know properties about users (e.g., their interests or physical locations) and deliver targeted ads based on those properties. Much of the privacy debate around online advertising has focused on the harvesting of these properties by the advertising networks. In this work, we explore the following question: can third-parties use the purchasing of ads to extract private information about individuals? We find that the answer is yes. For example, in a case study with an archetypal advertising network, we find that — for $1000 USD — we can track the location of individuals who are using apps served by that advertising network, as well as infer whether they are using potentially sensitive applications (e.g., certain religious or sexuality-related apps). We also conduct a broad survey of other ad networks and assess their risks to similar attacks. We then step back and explore the implications of our findings.
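
To make the mechanics concrete, here is a rough sketch, in Python, of the kind of workflow the abstract describes. Everything in it is an assumption made for illustration: the DSP client object with its create_campaign and impression_reports methods, the grid cells, and the advertising ID are hypothetical stand-ins, not the researchers’ actual tooling or any real ad network’s API. The point is simply that location-restricted ad campaigns targeted at a single mobile advertising ID can turn ordinary impression reports into a location trace.

# Illustrative sketch only. The DSP client, its methods, and all identifiers
# below are hypothetical; they stand in for whatever interface a real
# demand-side platform exposes to advertisers.
from dataclasses import dataclass

@dataclass
class GridCell:
    name: str
    lat: float
    lon: float
    radius_m: int          # hyper-local targeting radius

# Hypothetical places the adversary wants to monitor.
CELLS = [
    GridCell("home", 47.6062, -122.3321, 150),
    GridCell("office", 47.6553, -122.3035, 150),
    GridCell("clinic", 47.6205, -122.3493, 150),
]

TARGET_MAID = "00000000-aaaa-bbbb-cccc-000000000000"  # target's advertising ID (assumed known)

def launch_campaigns(dsp, maid, cells):
    """Create one geofenced campaign per cell, each targeted at a single device."""
    campaign_to_cell = {}
    for cell in cells:
        campaign_id = dsp.create_campaign(
            audience=[maid],                               # single-device targeting
            geofence=(cell.lat, cell.lon, cell.radius_m),  # only serve ads inside this cell
            creative="innocuous_banner.png",
            bid_usd=0.02,
        )
        campaign_to_cell[campaign_id] = cell.name
    return campaign_to_cell

def infer_locations(dsp, campaign_to_cell):
    """Each impression report places the target's phone in the matching cell."""
    sightings = []
    for report in dsp.impression_reports():   # hypothetical: yields objects with campaign_id and timestamp
        cell = campaign_to_cell.get(report.campaign_id)
        if cell is not None:
            sightings.append((report.timestamp, cell))
    return sightings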

The Tech Policy Lab plans to work with the ADINT research team to explore the policy implications of this research, examining potential recommendations for issues raised by this new form of personal surveillance.

More information can be found on the team’s website and in the UW News and UW CSE releases. The paper will be presented at ACM’s Workshop on Privacy in the Electronic Society later this month and can be found here.

Opportunity: Postdoctoral Research Associate in Value Sensitive Design

Postdoctoral Research Associate in Value Sensitive Design

The UW Tech Policy Lab seeks a Postdoctoral Researcher to bring a Value Sensitive Design (VSD) perspective to work on tech policy under the supervision of Lab co-director Professor Batya Friedman of the Information School. This position will be funded for two years, with the possibility of a third year of funding.

About the Tech Policy Lab

The postdoc will be joining a dynamic and interdisciplinary team of researchers who examine the policy implications of emerging technologies. The Tech Policy Lab is a unique, interdisciplinary collaboration at the University of Washington that aims to enhance technology policy through research, education, and thought leadership.  The Lab brings together experts from the University’s Information School, School of Law, and School of Computer Science and Engineering as well as other units on campus.

The Candidate

The applicant should have a Ph.D. or other relevant terminal degree and prior knowledge of and experience with value sensitive design. Familiarity with VSD’s tripartite methodology, as well as experience with specific design research methods focused on stakeholders, values elicitation, resolving value tensions, and so forth, is desired. The ideal candidate will also be interested in the policy and legal implications of emerging technologies, such as artificial intelligence (AI), brain-machine interfaces, and the Internet of Things (IoT). Excellent verbal, visual, and written communication skills are desired, as much of our work requires communicating technical concepts to non-experts.

Questions we may explore over the coming years, particularly with respect to AI and IoT, include:

• How can we bridge the gap between technologists and policymakers around the legal, technical, and social considerations associated with emerging technologies? How can we stimulate moral and technical imaginations?
• What methods could be developed to help bring the perspectives of under-represented groups into the early-stage tech policy development processes?
• What mental models could help policymakers and the public better understand and make better decisions about emerging technologies such as AI and machine learning?
• How can we make AI and machine learning algorithms and software development practices more transparent to policymakers and the public?
• How do we characterize responsible innovation? Irresponsible innovation? And communicate about these constructs with policymakers and the general public?
• What new methods and toolkits are needed to do the above?
• What new theoretical constructs?
• What role could public art or other installations play in achieving some of the above?

How to Apply

Interested candidates should send their CV along with a cover letter to Hannah Almeter at halmeter@uw.edu.

Review of applications will begin November 1, 2017.

The University of Washington is an affirmative action and equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, national origin, age, protected veteran or disabled status, or genetic information.

Securing Augmented Reality Output

A year ago, Pokémon Go became immensely popular as players explored their surroundings hunting Pokémon in the smartphone-based augmented reality (AR) app. This hyper-popular game, which barely scratched the surface of AR’s potential, led to increased interest in the technology. The AR industry is expected to grow to $100 billion by 2020, and with increasing interest in AR automotive windshields and head-mounted displays (HMDs), we could soon be able to experience immersive AR environments like the one depicted by designer and filmmaker Keiichi Matsuda in Hyper Reality.

[Image: still from Keiichi Matsuda’s Hyper Reality]

But what would happen if a pop-up ad covered your game, causing you to lose? Or if, while driving, an AR object obscured a pedestrian?

These are the types of situations researchers consider in a new paper, Securing Augmented Reality Output. In the paper, Lab student Kiron Lebeck, CSE undergraduate Kimberly Ruth, Lab Affiliate Faculty Franzi Roesner, and Lab Co-Director Yoshi Kohno address how to defend against buggy or malicious AR software that may, whether inadvertently or deliberately, augment a user’s view of the world in undesirable or harmful ways. They ask: how can we enable the operating system of an AR platform to play a role in mitigating these kinds of risks? To address this question, the team designed Arya, an AR platform that controls output through a designated policy framework, drawing policy conditions from a range of sources including the Microsoft HoloLens development guidelines and the National Highway Traffic Safety Administration’s (NHTSA) driver distraction guidelines.

[Image: Arya driving scenario]

By specifying “if-then” policy statements, this policy framework allows the Arya platform to apply a corrective mechanism, or action, to virtual objects that violate a condition. In a simulated driving experience, for example, Arya makes pop-up ads and notifications that could distract the driver transparent, applying the specified action (in this case, transparency) to objects that violate policies such as the following (a rough sketch of how such policies might be written appears after the list):
• Don’t obscure pedestrians,
• Only allow ads to appear on billboards, and
• Don’t distract the user while driving.
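
To give a sense of what these “if-then” policies could look like in practice, here is a minimal sketch in Python. The class names, attributes, and enforcement loop below are assumptions made for illustration, not Arya’s actual implementation; the sketch only shows the structure the paper describes, in which each policy pairs a condition over a virtual object and its context with a corrective action, here making the offending object transparent.

# Illustrative sketch only; names and attributes are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ARObject:
    owner_app: str
    kind: str                    # e.g., "ad", "notification", "nav_overlay"
    overlaps_pedestrian: bool = False
    on_billboard: bool = False
    transparency: float = 0.0    # 0.0 = fully opaque, 1.0 = invisible

@dataclass
class Policy:
    name: str
    violated: Callable[[ARObject, Dict], bool]   # the "if" condition
    enforce: Callable[[ARObject], None]          # the "then" mechanism

def make_transparent(obj: ARObject) -> None:
    obj.transparency = 0.8       # de-emphasize the object rather than delete it

POLICIES: List[Policy] = [
    Policy("don't obscure pedestrians",
           lambda o, ctx: o.overlaps_pedestrian,
           make_transparent),
    Policy("only allow ads on billboards",
           lambda o, ctx: o.kind == "ad" and not o.on_billboard,
           make_transparent),
    Policy("don't distract the user while driving",
           lambda o, ctx: ctx.get("driving", False) and o.kind == "notification",
           make_transparent),
]

def enforce_policies(frame_objects: List[ARObject], context: Dict) -> None:
    """Check every virtual object against every policy before the frame is drawn."""
    for obj in frame_objects:
        for policy in POLICIES:
            if policy.violated(obj, context):
                policy.enforce(obj)

In a sketch like this, something akin to enforce_policies would run on every frame, so a distracting notification would be faded out as soon as the driving context is detected, mirroring the behavior described in the simulated driving case study.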

By implementing Arya in a prototype AR operating system, the team was able to prevent undesirable behavior in case studies of three environments, including a simulated driving scenario. Additionally, the performance overhead of policy enforcement was acceptable even in the unoptimized prototype. The team, among the first to raise AR output security issues, demonstrated the feasibility of a policy framework for addressing AR output security risks, while also surfacing lessons and directions for future efforts in the AR security space.

To read more, see Securing Augmented Reality Output.