
Big Tech and Privacy Rights

Author: Claudia Maggi

Editor: Ruth Lucas



By collecting and storing data on users’ search activity, big tech firms generate behavioural predictions that they then sell to their real customers: advertisers. Shoshana Zuboff termed this logic of accumulation ‘surveillance capitalism’ in her renowned book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Given the sophistication and scale of tech companies’ surveillance practices, the use of people’s online activity as raw material for algorithms raises concerns about privacy and about the limits of regulation in protecting citizens from unlawful practices.


The first concern regarding users’ privacy relates to the concept of consent. When individuals confirm that they have read and agree to the ‘terms and conditions’, they have in theory consented to their data being stored and processed. In practice, these terms are ineffective at informing privacy decisions: studies have shown that users rarely read the policies, and even when they do, the documents are too laden with jargon to be understood by the general public. If the decision is uninformed, the consent is unlikely to be genuine. Additionally, tech companies’ dominance of the market means that many users have no alternative but to share their data. A European Commission survey found that around 60% of Europeans feel they have no choice but to provide their personal information in order to access certain services. Consequently, permission for data use becomes largely illusory.


A further issue is whether privacy rights are being eroded or redistributed. According to US Supreme Court Justice William O. Douglas, privacy entails an individual’s decision rights over whether to disclose information. On this view, the right to privacy means having the option to keep something secret. Big tech firms’ data collection hands them those decision rights over the secrets of people’s lives, since it is the firms that control whether the withheld information is used or shared. It is therefore worth asking whether data accumulation at this scale represents an anti-democratic threat, and to what extent it should be limited and regulated.


Moreover, data collection practices are increasingly undetectable and pervasive. Digital assistants such as Amazon Alexa and Google Home are a case in point. Alexa has been adapted to serve as the voice interface for controlling home systems and appliances, making it close to ubiquitous. Open to third-party developers, Alexa has also acquired hundreds of skills, from reading the news and recipes to calling an Uber. This ‘life operating system’ collects data and constructs behavioural predictions for sale, not only to advertisers but also to providers of real-world services such as house cleaning and restaurant delivery. As the Amazon senior vice president responsible for Alexa put it, “our goal is to try to create a kind of open, neutral ecosystem for Alexa... and make it as pervasive as we possibly can”. By inviting digital assistants into our homes, then, we allow surveillance capitalism to monitor and process our living habitats.


Lastly, technical complexity limits regulators’ capacity to protect citizens from illegal privacy breaches. Under the banner of protecting ‘user privacy’, firms obscure their data collection operations and capitalise on knowledge asymmetries until they face legal opposition. Their capacity to exploit public ignorance stems from the fact that these algorithms “were constructed at high velocity and designed to be undetectable”. For example, Google was fined $391.5 million for tracking the locations of users who had opted out of location sharing between 2014 and 2020. That it took six years for regulators to detect the illegal practice underscores the public sphere’s limited ability to identify risks in the face of technical complexity.


The discussion highlights numerous concerns over both the legal and the illegal uses of data by big tech firms. First, although behavioural data can improve services for users, it also allows firms to exploit users’ limited time, knowledge and attention for revenue and growth. Second, repeated privacy scandals have demonstrated the limits of the law in adapting to quickly evolving technologies. In light of these threats, we should ask how regulation can keep pace with the evolving meaning of ‘user privacy’ and with new technological complexities.


