Fourth Industrial Revolution

Industry review boards are needed to protect VR user privacy

Image: Attendees demo Oculus Go virtual reality headsets during Facebook Inc's F8 developers conference in San Jose, California, U.S., April 30, 2019. The movements you make playing VR and AR games create a huge amount of personal data. (REUTERS/Stephen Lam)

Jessica Outlaw
Director, Outlaw Center for Immersive Behavioral Science, Concordia University
Susan Persky
Head of Immersive Simulation Program, National Human Genome Research Institute

It seemed like a game when Riley first started the virtual reality (VR) maze. He used a room-scale setup, so by physically walking around his living room, he could solve puzzles and visit different parts of the virtual maze. His friends were networked into the game, so even though they were in their own living rooms, when Riley turned his head he could make eye contact with them. He could even give them virtual high-fives: slapping avatar hands triggered haptic feedback from his controller.

What Riley didn’t know was that the startup that created this game had decided to sell its users’ tracking data. Riley also didn’t know that a 20-minute VR game session recorded 2 million points of data about his body movement, and that an insurance company was one of the customers buying the game data. A month after playing the game, Riley was turned down for a new life-insurance policy. Given his excellent health, he couldn’t understand why. Several appeals later, the insurance company disclosed that Riley’s tracking data from the VR maze game revealed behavioral movement patterns often seen among people in the very early stages of dementia. Later, Riley’s sister, who had not played the VR maze game, was also rejected for life and long-term care insurance policies, as dementia tends to run in families.
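
How quickly does tracking data accumulate? A rough back-of-envelope calculation makes the 2 million figure plausible. The setup below (one headset plus two controllers, each reporting a 6-degree-of-freedom pose at a common 90 Hz sampling rate) is our assumption for illustration, not a detail from the scenario:

```python
# Back-of-envelope: how a 20-minute VR session reaches roughly 2 million points.
devices = 3            # assumed: one headset + two hand controllers
channels = 6           # assumed: 6-DoF pose (x, y, z, pitch, yaw, roll) per device
sample_rate_hz = 90    # assumed: a common VR tracking/refresh rate
session_seconds = 20 * 60

points = devices * channels * sample_rate_hz * session_seconds
print(f"{points:,} tracking values recorded")  # 1,944,000 -- close to 2 million
```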

This is a hypothetical situation, but the science of using movements tracked in VR to predict dementia, and the technology to do so, are very real. Currently, there are no standards or regulations as to how this data is collected, used or shared.

Virtual and augmented reality (VR and AR) biometric tracking data - micro-movements of the head, torso, hands and eyes - can be medical data. It can diagnose or predict anxiety, depression, schizophrenia, addiction, ADHD, autism spectrum disorder and more about a person's cognitive and physical function. Because VR and AR applications can detect changes over time in these disease-linked states, it may also become possible to develop successful therapeutic interventions.
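
To make "micro-movements" concrete, here is a minimal sketch of what a single frame of tracking data might contain. Every field name is an illustrative assumption, not any vendor's actual schema:

```python
from dataclasses import dataclass

Pose = tuple[float, float, float, float, float, float]  # x, y, z, pitch, yaw, roll

@dataclass
class TrackingFrame:
    timestamp_ms: int
    head: Pose            # head position and orientation
    left_hand: Pose       # controller or hand-tracking pose
    right_hand: Pose
    gaze: tuple[float, float, float]  # eye-tracking direction vector
    pupil_diameter_mm: float          # some headsets expose pupillometry

# Sampled dozens of times per second, these channels encode gait, tremor,
# reaction time and attention patterns -- the raw material for medical inference.
```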

The issue here, though, is the unintended consequences that arise when such medically relevant data is fed into users' psychometric profiles. Such profiles may start out as relatively harmless, merely predicting when someone might be getting ready to buy a new car. However, sprawling psychographic profiles with medical inputs could leave people vulnerable to the type of scenario outlined above.

Anonymizing VR and AR tracking data is nearly impossible because individuals have unique patterns of movement. No person throws a ball exactly the way that someone else does. Using gaze, head direction, hand position, height, and other behavioral and biological characteristics collected in VR headsets, researchers have personally identified users with 8 to 12 times better accuracy than chance.

In one study where researchers collected data at 95 time points in VR, they could identify an individual with 90% accuracy. Just like a zip code, IP address or voiceprint, VR and AR tracking data should be considered potential 'personally identifiable information' (PII), because it can be used to distinguish or trace an individual's identity, either alone or when combined with other personal or identifying information.
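
As an illustration of why such data identifies people, here is a naive sketch of movement-based identification. The cited studies use far richer features and models; this nearest-template matcher is purely our assumption for demonstration:

```python
import numpy as np

def enroll(sessions_by_user: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Average each user's per-session feature vectors (e.g. mean head height,
    gaze angles, hand-position statistics) into a single template."""
    return {user: s.mean(axis=0) for user, s in sessions_by_user.items()}

def identify(templates: dict[str, np.ndarray], sample: np.ndarray) -> str:
    """Return the enrolled user whose template is nearest the new sample."""
    return min(templates, key=lambda u: float(np.linalg.norm(templates[u] - sample)))

# Even this crude matcher beats chance once enough behavioral channels are
# recorded -- which is precisely why the data cannot be treated as anonymous.
```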

This is an issue that professionals in the medical research world have been grappling with for decades. Researchers have shown that health and medical information such as DNA sequences, medical records and health research data, even when stripped of names and other identifying information, can be traced back to individuals by combining it with other publicly available data sources. Unlike medical data, however, data collected by VR technologies is currently unregulated, and how it is collected, used and shared is not monitored by any external entity.
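
This kind of re-identification is often called a linkage attack. A toy example, on entirely invented data, shows how "anonymized" records can be joined to a public roster using quasi-identifiers such as a zip code and birth year:

```python
anonymized = [{"zip": "97201", "birth_year": 1961, "label": "dementia-like movement"}]
roster = [{"name": "R. Example", "zip": "97201", "birth_year": 1961}]

# Join the two datasets on the shared quasi-identifiers.
for record in anonymized:
    for person in roster:
        if (person["zip"], person["birth_year"]) == (record["zip"], record["birth_year"]):
            print(f'{person["name"]} -> {record["label"]}')
```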

In late 2018, VR and AR privacy policy professionals representing major players in hardware and software, as well as experts from academia and non-profits, were invited to attend a privacy summit at Stanford University to review the risks of this technology and to ideate potential solutions. The main areas of concern raised were loss of freedom, harm to reputation, and decreased access and opportunity due to online identities becoming inseparable from offline ones.

VR and AR data misuse could cause people to lose control of their identity and how they choose to present themselves to employers, insurers and others. There was special concern for the vulnerability of children who may be tracked using this technology from a young age and who are not capable of consenting to these risks.

Summit attendees generated a range of solutions that could be employed to protect user privacy, which were subsequently voted on. Some recommendations were as follows (a code sketch of how several could be enforced appears after the list):

· Limiting the collection of biometric data by VR and AR devices, either by disallowing collection of raw data, or automatically deleting the data after a defined period to eliminate the possibility of longitudinal data collection

· Explicit communication of the data policy (what’s being collected and how it is being used) in plain, clear language

· Not limiting access to an experience based on whether a user opts in to data collection, and never making opt-in the default setting

· When data policies change, all users should be asked to opt in again; consent should not be grandfathered through multiple iterations of a company’s privacy policy

· Awareness that acquisition is a weak spot in privacy policy - if one company is acquired by another, user biometrics should not be transferred to the purchasing company without re-consenting users
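
A minimal sketch of how three of these recommendations (opt-in never the default, automatic deletion after a retention window, consent reset on policy change or acquisition) might be enforced in code. The class, method and constant names are our own illustrative assumptions, not a real product API:

```python
import time

RETENTION_SECONDS = 30 * 24 * 3600  # assumed 30-day retention window

class BiometricStore:
    def __init__(self) -> None:
        self.opted_in: set[str] = set()                    # empty: opt-in is never the default
        self.samples: list[tuple[str, float, bytes]] = []  # (user, timestamp, raw data)

    def record(self, user_id: str, payload: bytes) -> None:
        if user_id not in self.opted_in:
            return  # no explicit opt-in, no collection
        self.samples.append((user_id, time.time(), payload))

    def purge_expired(self) -> None:
        # Deleting raw data after a defined period blocks longitudinal profiling.
        cutoff = time.time() - RETENTION_SECONDS
        self.samples = [s for s in self.samples if s[1] >= cutoff]

    def reset_consent(self) -> None:
        # On a policy change or company acquisition, all users must opt in again.
        self.opted_in.clear()
```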

One solution rose to the top at the summit: the adoption of a system similar to the institutional review boards (IRBs) that exist in universities, medical centers and companies across the world. A traditional IRB reviews researchers’ proposals to ensure that when research participants consent to become part of a research study, the study is conducted in an unbiased way that preserves their autonomy, and that the risks of their participation are minimized. Unlike more general tech advisory boards, IRBs are independent by definition, with diverse membership, and are focused on ethics, justice and respect for those who are the source of research data.

We believe that an IRB model can be successful in the context of VR and AR because potential harms are relatively well understood, and possible solutions (for example, requiring clear, prospective user consent to data use activities) align with existing IRB approaches. The hope is that this type of review model will be adopted industry-wide for VR and AR. One way it could work is for a centralized group of experts to review the privacy policies of each company. Their role would be to assess the risks to users of each company’s data collection, storage and usage policies. This independent board would be responsible for identifying the magnitude and probability of harm and for recommending steps that can be taken to minimize potential harm to the user.

IRBs were born of an unfortunate history of unethical scientific research in which people were unfairly injured, particularly those from disadvantaged backgrounds. Although AR and VR are relatively new consumer technologies, we perceive that the industry is heading toward an experiment in human behavior with the potential to cause harm.

Our proposal is to create an independent voice in VR and AR that advocates for the protection of users. In turn, we believe that these protections will draw more consumers to immersive technology. A significant amount of testing, iteration and refinement would be needed to make a review board strategy viable. We are optimistic that, executed well, this solution could make VR and AR a truly user-friendly technology.

Written with input from:

Jeremy Bailenson, Founding Director, Virtual Human Interaction Lab, Stanford University

Philip Rosedale, Founder & CEO, High Fidelity

Kent Bye, Journalist and podcast producer, The Voices of VR podcast
