DataBody Integrity: imagining a digital world that centres dignity
Emerging digital technologies are reshaping our relationship with the human body and mind.
Recent years have seen explosive growth in the number and variety of online platforms, smartphone apps and other tech-enabled systems that collect detailed physical and behavioural data about us. Think facial recognition in airline boarding, emotion recognition in recruitment, behavioural targeting in e-commerce, predictive tools in telemedicine, and virtual and augmented reality applications.
These are just a few of the many areas where data-driven computational models are reshaping our understanding of the human body, creating never-before-seen access to our innermost thoughts, feelings and desires.
But while access to this type of data could be fertile ground for important scientific exploration, the absence of strong industry norms and regulations has created a wild west in the digital world.
Our current reality is a sea of platforms, apps and tech-enabled systems that cause more harm than good by jeopardising user well-being and safety.
We have silicone wristbands that notify employers about their staff’s mood changes, mental health apps that sell people’s deepest feelings and anxieties to third party companies, and extended reality applications that facilitate all sorts of digital abuse.
So far only a few companies have designed sufficiently robust measures to mitigate the negative consequences of harnessing biometric and behavioural data.
In the meantime, current and proposed tech industry legislation (including AI regulations, data protection laws, and other related legal instruments across the world) has so far failed to create a regulatory framework that meaningfully prevents such harms.
In our age, even seemingly innocuous data points can fuel complex analysis of our innermost feelings, and the line between our physical and virtual bodies is becoming increasingly irrelevant. Because of that, we need a completely new generation of laws and industry standards to guide digital innovation and experimentation.
To contribute to those efforts, I have recently joined a group of scholars and activists as part of the Mozilla Foundation’s 2023/24 Senior Tech Policy Fellow cohort.
In the coming period, I will explore what we can learn from other, more heavily regulated fields where experimentation with the human body and mind happens under safer, more controlled circumstances.
Drawing inspiration from clinical trials, medical research and behavioural research, I will identify concrete areas where bodily integrity and autonomy could be more systematically integrated into tech industry norms and regulation — most notably AI regulation and data protection regimes.
Specifically, my research will examine how a more holistic approach to bodily integrity could impact areas where digital innovation has the most profound effect on our bodies and minds, including:
- Facial recognition;
- Affect (or emotion) recognition;
- Other types of biometric data collection;
- Predictive analytics tools;
- Fitness and mental health apps, crisis text lines;
- Extended reality technologies; and more.
The ultimate goal of my project is to raise awareness around the enormous impact that emerging digital technologies have on our bodies and minds, and to make sure that our societies are best prepared, both legally and socially, for the enormous transformation that is happening in the tech industry.