Art from Hülya Özdemir

Bodies of data or “databodies”?

Julia Keseru
7 min read · Jan 26, 2022


In this piece I argue that predatory technologies have become our digital default because of the naive assumption that the Internet would keep our bodies invisible. As a result, we now have a largely unregulated technology industry, with data protection regimes that cannot guarantee the integrity of our physical bodies. This essay is the third in a series exploring the connection between bodily integrity and our online reality. You can read the first two pieces here and here.

As of next summer, tax filers in the United States will only be able to access certain online services if they go through an identity verification process that uses facial recognition software. The details of what exactly the Internal Revenue Service will require of American taxpayers are still somewhat unclear, but the news has refocused public attention on the controversies surrounding mainstream avenues of biometric data collection, most notably facial recognition technologies.

Facial recognition has evolved at explosive speed over the past decade. That dramatic improvement was enabled by exponential growth in the amount and quality of pictures that people posted online. But as technology journalist Karen Hao explains, while early experiments tended to seek full participant consent, those systems struggled to yield good enough results, driving researchers to look for ‘larger and more diverse data sets’.

As a result, more and more pictures were downloaded without people’s consent, paving the way for increasingly sophisticated surveillance technologies and eroding our societal norms around consent and privacy.

Art from Mathilde Aubier

Notably, Facebook (now Meta) was one of the industry pioneers that took the technology to the next level by developing DeepFace, a deep-learning model meant to make identifying people on the platform much easier. At its release, the automatic tagging mechanism was the default, but in response to a series of lawsuits, Facebook created an opt-in process instead. Last year, the company went as far as to announce that it would shut down DeepFace, citing mounting public concern about “the place of facial recognition technology in society”.

Digital rights activists around the world applauded the decision, noting that such erosion of consent has long-lasting side effects (beyond being genuinely “creepy”, as Neil Richards notes). These include the rise of emotional disorders like anxiety and depression, the gradual corrosion of free expression, the flourishing of biased and discriminatory systems, the inability to protect ourselves against abuses of power, and many other crucial societal problems.

But while the decision to shut down DeepFace might have been a welcome shift, Facebook’s voluntary move will likely remain the exception to the rule, not our new digital default.

The evolution of predatory tech

In the early days of the Internet, many people hoped that digital technologies would have an ‘equalising’ effect on our social institutions: by keeping our bodies invisible, they would erase the exclusion and discrimination that build on physical hierarchies.

Since then, it has become quite clear that this assumption was naive at best and harmful at worst. Research consistently shows that people with darker skin tones are far more likely to be misrecognised by image recognition technologies, over-surveilled, wrongfully jailed, and otherwise harmed by automated decision-making systems. Women and gender non-conforming bodies bear the lion’s share of the abuse that happens online. People living with disabilities are being left behind by inaccessible and discriminatory new technologies.

Everydays: the First 5000 Days by Mike Winkelmann

But the rather naive image of tech as a great equaliser has done much more harm than ‘just’ enabling the exploitation of the vulnerable. As researchers Corinne Cath and Becca Lewis argue, the assumption that our bodies would remain invisible in the virtual sphere has left us with an unregulated communication infrastructure, creating exploitative dynamics that have since become mainstream across the tech industry.

In the meantime, data protection systems have struggled to keep pace with technological innovation. While many data protection laws, like the EU’s General Data Protection Regulation or the California Consumer Privacy Act, should be applauded for their strong mandates around user autonomy and consent, those principles have proven rather difficult to implement in a world where datasets are vast and interconnected, digital interactions are complex, and technologies improve at breakneck speed.

To illustrate: when we consent to our pictures being used to build a new feature or a novel algorithm, most of us average users have no idea how Facebook’s decision to create an automatic tagging mechanism might affect our lives in the long run; how our social relations may change if facial recognition becomes mainstream; what types of wrongdoing become possible if our pictures, and with them our biometric data, get into the wrong hands; and more. And even if we do understand the consequences, opting out might be just as complicated or confusing. To return to the example of Facebook, the company had previously argued that DeepFace’s automatic tagging mechanism was designed to protect its users from fraud and impersonation, a claim that would be rather difficult for an average user to fact-check.

Yet even though autonomous and informed decisions are not always feasible, we still need certain norms to guide our interactions with other people’s virtual boundaries.

This is where bodily integrity, as a concept, may come in handy.

Bodily integrity in the digital age

Bodily integrity has been defined by philosopher Martha Nussbaum as people’s right to remain free from physical interference: our freedom to determine the broader contours of what may, and may not, happen to our bodies.

The concept is now a cornerstone of human rights law, where violations of bodily integrity are regarded as undignifying at best (like forcing a child to do something against their will) and criminal at worst (like sexual abuse, torture or genital mutilation). Bodily integrity also plays an increasingly important role in medical ethics, which acknowledges that even when patients give up some of their autonomy in exchange for crucial medical services, most do so without sacrificing their physical boundaries. In other words, even when we cannot decide whether a certain medical intervention is good for our bodies, we still expect doctors to treat us in ways that uphold our dignity and integrity.

Illustration by GlebGleb

But despite having gained a foothold in multiple areas and disciplines, the concept of bodily integrity is not (yet) an integral part of the laws and norms that guide digital interactions.

On the one hand, this is perfectly understandable: data is an abstraction that is often presented to us as something cold, rational, objective and neutral. Our physical boundaries and sensory processes are difficult to translate into the digital realm, and so the impact of online intrusions remains hard to grasp.

On the other hand, by now it has become clear that digital technologies have a profound impact on our bodies, in both direct and indirect ways: they affect how we see ourselves in the mirror, how we express and suppress our basic biological needs and desires, and how we communicate about our innermost thoughts and feelings. Data about us is no longer just content; it is a manifestation of our relationship with our bodies, an extension of our will, personhood and agency.

Because of this, researchers, activists and policy-makers around the world have been arguing that our bodies require stronger protections in the digital age, and that the laws and norms governing our digital interactions need to pay more attention to the integrity and dignity of end users. Such a paradigm shift, from autonomous decisions to a more holistic approach to physical integrity, could (and should) translate into a few important changes, including:

  • Stricter rules on online mechanisms that can affect our real-world bodies, including any data collection effort that touches on our bodily characteristics, e.g. facial recognition, emotion recognition, and other forms of biometric data collection.
  • Stricter rules on “digital experimentation”, understood broadly as all online interactions where information about our innermost thoughts, moods and feelings gets collected, analysed and modified, including some predictive products, ad targeting, and online persuasion architectures.
  • Better avenues, both legal and technical, for grasping how a given data collection process could affect our digital integrity, how much data has been collected about us in the past, and who may have access to that information.
  • Alternative ways to opt out of a data collection process without any repercussions, as a means of protecting one’s digital dignity, and meaningful consent mechanisms in instances where data collection is justified and necessary.
  • A healthier sense of digital boundaries, especially amongst children and other vulnerable groups who are most susceptible to abuse, both online and offline.
  • Alternative data governance models that enable users to store their data with trusted intermediaries, along with robust oversight channels and well-resourced legal clinics that provide remedies to end users.

The pandemic made it abundantly clear that disconnecting from the Internet is no longer an option for the majority of the population, and that creating a healthier online experience will be crucial for our collective mental health. While refocusing our attention on our bodies has its own limits and complications from a digital perspective, it might be a first step in the right direction, especially in instances where data has such a profound impact on our physical well-being.

Stay tuned for the next part of the series, where I explore the limitations and complexities of introducing bodily harms to the digital picture. And reach out at jkeserue@gmail.com if you have any thoughts, questions or ideas about this topic!

Copyright © Julia Keseru


Julia Keseru

Activist, writer, occasional poet. People nerd, cancer survivor. Interested in technologies, justice and well-being. https://www.jkeserue.com/