The results of a recent study carried out by psychologists at Stanford University (Miller, Herrera, Jun, Landay, & Bailenson, 2020) no doubt raised eyebrows amongst members of the virtual reality (VR) community. Using data collected from just five minutes of VR exposure, a machine learning algorithm was able to identify individuals from a population of 500 with 95% accuracy. The data analysed by the algorithm comprised eighteen different measures of motion, from variations in rotational head movement to nuances in how an individual’s hands came to rest during periods of inactivity (Miller et al., 2020). An alarming point to note is that this data was collected using standard commercial VR equipment: a head-mounted display (HMD) and hand controllers, rather than the more sophisticated body suits that are largely confined to specialist VR users. The implications of this capability can be viewed as rather benign, provided it is bounded by ethically and morally scrupulous principles. There is, however, a danger that it could be used for the wrong reasons, and in ways that users of VR may not be wholly aware of. This article examines the ethics of capturing biometric data, how it may be used in unscrupulous ways, and what safeguards are in place to protect users of such technology.
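To make the shape of that identification task concrete, the minimal sketch below trains an off-the-shelf classifier on per-session summaries of motion. It is purely illustrative: the synthetic features, the random-forest model, and the scale of the data are assumptions made for demonstration, not the pipeline used by Miller et al. (2020).

```python
# Illustrative sketch only: NOT the method used by Miller et al. (2020).
# Assumes each row summarises one short VR session as a fixed-length vector of
# motion statistics (e.g. head rotation ranges, resting hand positions), with
# the user's identity as the label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_users, sessions_per_user, n_features = 50, 20, 18  # 18 motion summaries per session

# Synthetic stand-in data: each user gets a stable "kinematic signature" plus noise.
signatures = rng.normal(size=(n_users, n_features))
X = np.vstack([sig + 0.3 * rng.normal(size=(sessions_per_user, n_features))
               for sig in signatures])
y = np.repeat(np.arange(n_users), sessions_per_user)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"Identification accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```

The point of the sketch is simply that, once each user's movements cluster around a stable signature, even a generic model can separate people with uncomfortable ease.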

Kinematic Data Is Prevalent Across All Technologies

It is important to note that the collection and storage of biometric data is not confined to VR; many devices in recent years have gone somewhat under the radar in capturing the most personal of all datasets. Wearable devices such as e-health bands and smartwatches are major collectors of biometric data, and more advanced technologies such as brain-sensing headbands now collect data as intimate as our own brainwaves. Even the latest smartphones collect biometric data, often without us giving it much consideration. Anyone who unlocks their smartphone using their fingerprint or through facial recognition has handed over biological information, unique to them, to the manufacturers of these technologies. VR is somewhat special in that it can collect data relating not only to our physical make-up but also to our unique physical movements, which has been coined our kinematic fingerprint (Slater et al., 2020). Two of the largest players in the VR market, Oculus and HTC, have the right to collect and share this data under the premise that it is de-identified before being transferred to a third party (Miller et al., 2020). How do they have this right? It comes from what we all too often do when availing of technology: we accept the T&Cs without scrutiny. One question that always materialises in discussions of personal data collection like this is: is there really an issue here? And if so, what does it look like?
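To see why "de-identified" is a weaker guarantee than it sounds, the short sketch below illustrates one common form of de-identification, pseudonymisation, applied to a hypothetical telemetry record. The field names and salting scheme are invented for illustration and do not reflect any vendor's actual schema or process.

```python
# Illustrative sketch only: a simplistic notion of "de-identification" in which
# direct identifiers are stripped or pseudonymised before data is shared with a
# third party. All field names below are hypothetical.
import hashlib

def pseudonymise(record: dict, salt: str) -> dict:
    """Replace the account ID with a salted hash and drop direct identifiers."""
    shared = dict(record)
    account_id = shared.pop("account_id")
    shared.pop("email", None)
    shared["pseudonym"] = hashlib.sha256((salt + account_id).encode()).hexdigest()[:16]
    return shared

session = {
    "account_id": "user-12345",
    "email": "someone@example.com",
    "head_rotation_range_deg": [41.2, 17.8, 9.3],
    "resting_hand_height_m": 1.02,
}
print(pseudonymise(session, salt="example-salt"))
# Note: the motion fields are left untouched, which is precisely why a distinctive
# kinematic fingerprint can still re-identify the person behind the pseudonym.
```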

Who should have access to our data?

The right to privacy is a major issue when it comes to data collection. It is widely accepted that data as unique and as private as our biometric data should remain the property of the individual (Floridi & Taddeo, 2016; Heikkilä et al., 2018). There is, however, a balancing act between keeping such data under the control of the individual and granting its use for the advancement of technology, which in turn can improve our daily lives (Hand, 2018). This tension has been termed the privacy paradox: as individuals we try to limit the amount of personal information we give away when using technology, yet there is a threshold beyond which we are content to provide this data to another person or organisation in order to avail of the benefits of the technology (Power & Kirwan, 2015). When we provide companies with our personal data, we expect them to use it only as necessary to provide a product or service that meets our needs, and to store it securely. The collection, storage, and later use of data is largely guided by the ethical principles laid out in the Belmont Report; however, this report, published in 1979, is not a binding legal document and instead serves as a guide to best practice (Hand, 2018). Because of the pace of innovation in the digital technology space, regulation invariably lags behind the birth of new technologies, meaning users can be left exposed to improper or unethical use of their data by big companies and deviant individuals until regulation catches up (Power & Kirwan, 2015). This article is not designed to scare people away from using wearable technologies, but handing over biological data so freely may come at a cost. Below are some examples of how this may occur.

Online Behavioural Targeting

“The goal of everything we do is to change people’s actual behaviour at scale. When people use our app, we can capture their behaviours, identify good and bad behaviours, and develop ways to reward the good and punish the bad. We can test how actionable our cues are for them and how profitable for us” – Anonymous Chief Data Scientist of a leading Silicon Valley start-up (Zuboff, 2015).

Online behavioural targeting (OBT) is the practice whereby companies build profiles of individuals based on their online data, largely collected through a web browser’s cache, cookies, and location services (Castelluccia, 2012). A behavioural profile is built for an individual based on their previous actions, and tailored marketing tactics are then used to try to persuade and affect that individual’s behaviour (Nill & Aalberts, 2014). While this can be beneficial for end users when they are directed to sites or products of interest, the same conceptual idea can be used for more sinister purposes. In 2014, a study was conducted on almost 700,000 Facebook users, without their informed consent, to test whether emotional contagion can exist in online networks (Kramer, Guillory, & Hancock, 2014). By manipulating users’ News Feeds to contain either (a) more posts with positive content or (b) more posts with negative content, the researchers found that individuals’ posting behaviour was affected by the positive or negative emotional skew of their News Feed. While lambasted by many in the world of psychological research for the ethically questionable nature of the experiment (Hunter & Evans, 2016; Shaw, 2016), the researchers involved cited Facebook’s data privacy consent form as a means of obtaining informed consent (Kramer, Guillory, & Hancock, 2014); this consent form is included in Facebook’s terms and conditions, which all users must accept when creating an account. While news stories and pop-culture documentaries have made many of us more aware and more vigilant when using the internet, wearable technology poses a problem we have not faced before: how do we mask physiological responses to emotional stimuli? Some companies are currently developing machine learning algorithms to identify individuals and their emotional states based on their physical movement.

Health and Wellbeing

Physiological data might not only reveal emotional states but may also pick up more pressing health issues, such as cardiovascular problems. While this can benefit users if reported correctly to the user and/or an agreed health service representative, it may also have other consequences depending on where the data goes. Could users of VR who have yet to be diagnosed with cancer become the subject of targeted ad campaigns for cancer treatment? Or might their health insurance premiums spike because of data sharing between companies? While wearable technology coupled with AI has been welcomed by many in the healthcare field (Bates, 2019; Mesko, 2017), there is a risk that, if data ownership is not clearly defined, private data may become public information.

Security Risks

Many of the latest devices now use physiological features as identifiers for security purposes. While such features reduce the need for passwords and are less prone to breaches, they carry a distinct risk: unlike a password, biometric data cannot be changed, so once it is breached it has been conceded to others forever. Relying on something as set in stone as biometric data may cause problems later in life for users whose data has been breached multiple times (Hand, 2018). As the online world looks set to move towards a more augmented reality (Slater et al., 2020), fears have been expressed that online versions of individuals will become easier to clone as AI creates ever more realistic representations of us. Advancements in 3D printing may leave individuals more exposed to fraud in the physical world, while also making it more difficult for the judicial system to rely on evidence once seen as bulletproof in court, such as fingerprint minutiae and DNA.

Thoughts speak louder than actions?

While VR has managed to capture an individual’s kinematic fingerprint, emerging technologies such as electroencephalogram (EEG) headsets are now capturing users’ brainwaves to allow them to carry out tasks based on learned patterns of brain activity. While this can empower users to search for a webpage using their mind alone, there is also the potential for an individual’s brain activity to be used against them. Individuals who display specific patterns when encountering stimuli in their environment may be flagged as homophobic, racist, or displaying signs of paedophilia. While this may sound rather dystopian and hyperbolic, it is important to understand that never before have our thoughts or instinctive reactions been measured in such detail. Hence, it is important that we understand who we are giving our data to and what it is being used for.

What does the future hold?

While regulation catches up with the emergence of these technologies, it is largely up to the companies involved to regulate themselves based on ethics and best practice. Advances have been made in recent years on the regulation side, notably the European Union’s General Data Protection Regulation (GDPR), which will take on a similar identity in the UK following Brexit (IT Governance UK, 2020). While this is a step in the right direction, there is a fine balance between safeguarding users of technology and allowing overregulation to stifle innovation in this space (Floridi & Taddeo, 2016). Some countries have been slow to regulate the private sector when it comes to data collection, with many seeing such databases as a benefit to the state that can be accessed when needed (Power & Kirwan, 2015). In the Western world this may not be at the forefront of many users’ minds; however, the increasing number of scandals relating to the misuse of personal data by government agencies and big corporations means that, as individuals entering this next phase of the online world, we must be vigilant in protecting our own personal property and demand more accountability and responsibility from those looking to profit from it.


References

Bates, M. (2019). Health care chatbots are here to help. IEEE Pulse, 10(3), 12-14.
Castelluccia, C. (2012). Behavioural tracking on the internet: a technical perspective. In European Data Protection: In Good Health? (pp. 21-33). Springer, Dordrecht.
Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A, 374, 20160360.
Hand, D. J. (2018). Aspects of data ethics in a changing world: Where are we now?. Big data, 6(3), 176-190.
Heikkilä, P., Honka, A., Mach, S., Schmalfuß, F., Kaasinen, E., & Väänänen, K. (2018, October). Quantified factory worker-expert evaluation and ethical considerations of wearable self-tracking devices. In Proceedings of the 22nd International Academic Mindtrek Conference (pp. 202-211).
Hunter, D., & Evans, N. (2016). Facebook emotional contagion experiment controversy.
IT Governance UK. (2020). Retrieved 10 December 2020, from https://www.itgovernance.co.uk/eu-gdpr-uk-dpa-2018-uk-gdpr
Kramer, A. D., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788-8790.
Mesko, B. (2017). The role of artificial intelligence in precision medicine. Expert Review of Precision Medicine and Drug Development, 2(5), 239-241.
Miller, M. R., Herrera, F., Jun, H., Landay, J. A., & Bailenson, J. N. (2020). Personal identifiability of user tracking data during observation of 360-degree VR video. Scientific Reports, 10(1), 1-10.
Nill, A., & Aalberts, R. J. (2014). Legal and ethical challenges of online behavioral targeting in advertising. Journal of current issues & research in advertising, 35(2), 126-146.
Power, A., & Kirwan, G. (2015). Privacy and security risks online. In A. Attrill (Ed.), Cyberpsychology. Oxford University Press.
Shaw, D. (2016). Facebook’s flawed emotion experiment: Antisocial research on social network users. Research Ethics, 12(1), 29-34.
Slater, M., Gonzalez-Liencres, C., Haggard, P., Vinkers, C., Gregory-Clarke, R., Jelley, S., … & Szostak, D. (2020). The ethics of realism in virtual and augmented reality. Frontiers in Virtual Reality, 1, 1.
Zuboff, S. (2015). Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89.