The coronavirus pandemic — and the unfolding economic, social and political crises that followed — made it abundantly clear that seemingly logical trade-offs between our fundamental rights are arbitrary, misleading and harmful. In this piece I argue that binarism is not only deeply rooted in our thinking, it is also intertwined with the logic of authoritarian power and exploitative systems. It is therefore impossible to meaningfully address any such collision on a case-by-case basis — instead, binarism must be challenged at its core.
When Covid-19 began to spread across the world, there were only a handful of confirmed cases in my home country, Hungary. These first cases were closely monitored by the local authorities and followed eagerly by the media and the public. In late March, the task force responsible for coordinating protective efforts published a list of everyone in the country who had died of coronavirus thus far. The list was short, and it included sensitive information about the deceased, which made it easy not only to identify them, but also to learn about their underlying conditions.
Activists immediately condemned how the government of Hungary had disclosed the information, saying it would have been easy to de-identify the data. The official response was out of the usual political playbook: it’s either transparency or privacy — you can’t have it both ways.
The argument that our fundamental rights (access to information, privacy, health, security, freedom of expression) must inevitably collide is not new or unique, but it has come up even more often since coronavirus started spreading.
“These trade-offs — while they might seem logical — oversimplify the available choices and overlook possible answers and solutions”
In an article describing the controversies of disclosing health data amid a pandemic, New York Times journalist Thomas Fuller said “in the perennial tug-of-war between privacy and transparency in the United States, privacy appears to be winning in the coronavirus pandemic”. And when Covid-19 began to escalate, Maciej Cegłowski — in spite of his long-standing privacy activism — argued that mass-surveillance was indeed essential to fight the pandemic, saying that worrying about the dangers of ubiquitous data collection in these times was like being “concerned about black mold growing in the basement when the house is on fire”.
And yet, these trade-offs — while they might seem logical — are both misleading and harmful. Misleading, because they oversimplify the available choices and overlook possible answers and solutions (which may in fact be feasible, often with only a bit of extra work and investment). Harmful, because they perpetuate existing injustices and contribute to upholding a status quo that benefits certain groups at the expense of others.
Since the escalation of Covid-19, activists have repeatedly sought to expose the fundamentally flawed nature of these trade-offs.
In response to the Hungarian government’s data management practices, the Hungarian Civil Liberties Union (HCLU) published a simple visualisation demonstrating how that data could have been published in meaningful, yet privacy-respecting ways. The HCLU also argued that more often than not, access to information regimes do not even refer to the same types of data as privacy protection regulations. In other words, we may need increased scrutiny of government dealings, but the same level of transparency is irrelevant and unnecessary when it comes to the sensitive data of average citizens.
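To illustrate the kind of alternative the HCLU was pointing at, here is a minimal sketch of one privacy-respecting publication strategy: releasing only aggregated counts over generalised age bands instead of individual-level records. The field names and records below are entirely hypothetical — this is not the actual Hungarian dataset or the HCLU's visualisation, just an illustration of the general technique of generalisation and aggregation.

```python
from collections import Counter

# Hypothetical individual-level records — the kind of detail that makes
# re-identification easy and should NOT be published as-is.
records = [
    {"age": 67, "sex": "F", "condition": "diabetes"},
    {"age": 72, "sex": "M", "condition": "hypertension"},
    {"age": 81, "sex": "F", "condition": "heart disease"},
    {"age": 68, "sex": "M", "condition": "diabetes"},
]

def age_band(age, width=10):
    """Generalise an exact age into a coarse band, e.g. 67 -> '60-69'."""
    low = (age // width) * width
    return f"{low}-{low + width - 1}"

# A publishable aggregate: deaths per age band, with the directly
# identifying details (exact age, sex, condition) stripped away.
aggregate = Counter(age_band(r["age"]) for r in records)
print(dict(aggregate))  # {'60-69': 2, '70-79': 1, '80-89': 1}
```

The same count data still informs the public about which age groups are most affected — the transparency goal — while no longer singling out individuals. Real de-identification schemes add further safeguards (for instance, suppressing bands with very small counts), but even this simple step shows the trade-off is not binary.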
The health versus privacy debate has been a similarly intriguing one. In the complex discussion about contact tracing apps, activists have repeatedly pointed out that it is possible to achieve public health benefits without abusive surveillance, and that relying on contact tracing technologies alone is both invasive and ineffective. Privacy International, for instance, claimed that while some of these apps may work better than others, none of them can replace manual contact tracing. Others, like the Ada Lovelace Institute or Access Now, provided detailed guidance on the considerations health agencies should weigh when developing new technologies amid a pandemic.
Activists have also argued that not only is it possible to reconcile access to health care with privacy, it is imperative that we do so. As many have reminded us, surveillance infrastructures have a tendency to become “permanent fixtures” far outside their original mandate — we have already seen this happen in the global panic after 9/11, when governments argued that the threat of terrorism outweighed abstract concerns about privacy. And yet, the post-9/11 surveillance infrastructures have proven both harmful and largely ineffective in the fight against terrorism.
“Nuance and safety often fall prey to speed and efficiency”
So why are we so frequently asked to make such difficult choices between rights that should be granted to all of us by default?
We know that the human mind has a tendency to think in binary opposites — as cognitive science and philosophy have both shown. Many of our narratives are driven by a conflict between opposing, mutually exclusive forces: ‘us’ versus ‘them’, ‘either’ versus ‘or’. (Notably, data protection is often included in the equation these days, but binary thinking is obviously far more wide-reaching than privacy issues.)
It would be easy to assume that “either-or” thinking is a byproduct of time constraints or a lack of knowledge and resources — that binary thinking is merely a side effect of urgency and pressure. And it’s true that in many cases, those factors are indeed the root cause. In misguided attempts to design new systems swiftly (for instance, because of a pandemic), nuance and safety often fall prey to speed and efficiency.
But more often than not, these false trade-offs between our rights and freedoms end up benefiting certain groups at the expense of others. For instance: corporations selling their dubious technology products to public agencies and employers, big tech companies (like Palantir, Zoom or Clearview AI) using the virus to further consolidate their market share, authoritarian governments blaming the pandemic for why they are avoiding scrutiny, and so on. Many corporations and governments exploit crisis situations to further their own political and economic gains — it is therefore unrealistic to assume that they have any interest in resolving such binaries out of sheer respect for our human rights.
In her book The Age of Surveillance Capitalism, where she explains the alarming fusion between digital capabilities and free-market ambitions, Shoshana Zuboff says that expecting big tech companies to refrain from surveilling us would be like “asking a giraffe to shorten its neck, or a cow to give up chewing”. Such demands, she argues, pose existential threats to these organisations, since their core business model is to design lucrative digital products by extracting and analysing our data. Instead of pursuing more ethical data collection practices, tech companies therefore present us with a simple choice (which to many may seem perfectly logical): to gain access to certain digital services, we have to sacrifice our privacy in return.
The same logic works for practically every trade-off we are asked to make between our fundamental rights and freedoms — transparency versus privacy, privacy versus health, or even when Facebook tells us that credible information is only achievable by jeopardising our freedom of expression. These false choices are fuelled by people’s genuine fear that we will lose access to something crucial unless we sacrifice something else in return — and this fear has only grown in a digital era, where technical knowledge is concentrated in the hands of a relative few.
But these trade-offs cause great harm to our societies. They deepen existing injustices and perpetuate an unfair distribution of power and resources. As my colleague Zara Rahman reminds us in a piece exploring how Covid-19 surveillance could be used against Black Lives Matter protesters, widespread surveillance mechanisms often end up being misused by those in power and inevitably result in human rights violations, as well as punitive law enforcement practices. Such practices then disproportionately affect vulnerable groups — people of colour, migrants, LGBTQI communities — those who are already disadvantaged by the unfair distribution of public resources.
“Our rights are not a zero-sum game”
As an activist working at the intersections of technology, rights and power, I’m often told that civil society’s demands are unrealistic and too abstract. But I have seen multiple examples of how the reconciliation of seemingly opposing forces ultimately becomes possible — when those responsible for the design of these systems aspire to synthesis, instead of collision.
Ultimately though, it is impossible to meaningfully address any such collision on a case-by-case basis if we do not challenge our binarism at its core. This means not only refusing to accept these false trade-offs, but actively demanding more thoughtful and nuanced solutions and answers from our political and industry leaders. And whenever we are presented with a seemingly logical choice between our freedoms, we need to fundamentally question the underlying forces that fuel that binarism in the first place, instead of simply accepting that we have no real choice. Such thinking is not always easy — it requires openness, creativity, investment, collaboration and critical thinking — but it is imperative, especially if we want to see new and better solutions to our existing problems, and live in societies that work for everyone, not just a few.
Because ultimately, while resources may be a zero-sum game, our rights do not have to be.
(This post has been updated on June 21.)