Sunday, June 29, 2025

Privacy and Artificial Intelligence - 1.9 Surveillance and Loss of Autonomy (A)

1.9 Surveillance and Loss of Autonomy

Introduction

Picture a world where every step you take, every place you visit, and even some of the choices you make are quietly watched and recorded by computers and cameras. With artificial intelligence (AI), this is becoming reality through AI-powered surveillance: technology that monitors, tracks, and analyses people's actions. Although proponents claim such systems deter crime and improve safety, they raise serious concerns about privacy and personal freedom. Constant observation pressures people to behave as though someone is always watching and limits their ability to choose freely, leading to a loss of autonomy (AI Safety Report, 2025; Number Analytics, 2025).

Technical or Conceptual Background

Modern AI surveillance relies on cameras, sensors, and sophisticated software to collect and process data about individuals and groups (IBM, 2025; Lawfare, 2025). These systems can recognise faces, follow movements, and even predict future actions. When trained on biased or incomplete data, they can misidentify or unfairly target certain groups, reinforcing discrimination (Number Analytics, 2025; CFMA, 2025). Continuous monitoring also pushes people to modify their speech, dress, or behaviour to avoid scrutiny, a response known as self-censorship (Hertie School, 2018; Krook, 2025).
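
To make the pipeline concrete, here is a minimal sketch of its first stage: finding faces in a camera frame, using OpenCV's bundled Haar-cascade detector. This is an illustration only; deployed systems use far more capable deep-learning models, and the camera index and display loop here are assumptions for a desktop demo.

```python
# Minimal face-detection sketch using OpenCV's bundled Haar cascade.
# Deployed surveillance stacks use stronger deep-learning detectors,
# but the basic loop (capture a frame, detect, act on hits) is the same.
import cv2

# Pre-trained frontal-face detector shipped with opencv-python.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # assumption: a webcam at index 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is a bounding box (x, y, width, height).
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("camera feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```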

Current Trends and Challenges

AI surveillance is expanding worldwide. Governments deploy it to manage traffic, watch protests, and filter online content, sometimes suppressing dissent (Lawfare, 2025; CFMA, 2025). Transparency is scarce; citizens rarely know when monitoring occurs or how the AI reaches its decisions (Number Analytics, 2025).

AI surveillance can also cross borders, with one country’s system monitoring people in another country. Uganda, for example, installed nationwide “Safe City” cameras with facial-recognition technology supplied by the Chinese firm Huawei in 2019 (Privacy International, 2020). Camera footage is transmitted to Huawei’s cloud infrastructure. Because Huawei’s core engineering teams are based in China, technicians there can reach the data for maintenance or updates, which could give Chinese authorities access to information about Ugandan citizens if disclosure is required under Chinese law (Privacy International, 2020; ICTworks, 2024).
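
The jurisdictional point can be pictured in a few lines of code. The sketch below is hypothetical: the endpoint URL, camera identifier, and payload shape are invented for illustration, and it assumes the Python requests library. It shows only that once a frame is uploaded to infrastructure operated from abroad, access to that frame is governed by the operator's legal obligations as well as the host country's.

```python
# Hypothetical sketch: an edge camera uploading footage to a
# vendor-operated cloud hosted in another jurisdiction.
# Endpoint, camera ID, and payload shape are invented for illustration.
import requests

VENDOR_CLOUD = "https://cloud.example-vendor.com/v1/footage"  # foreign-hosted (assumption)

def upload_frame(camera_id: str, jpeg_bytes: bytes) -> None:
    # Once this request completes, the frame sits on the vendor's servers.
    # Who may compel its disclosure is then shaped by the vendor's home-country
    # law (and its remote maintenance staff), not only the camera's host country.
    resp = requests.post(
        VENDOR_CLOUD,
        files={"frame": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        data={"camera_id": camera_id},
        timeout=10,
    )
    resp.raise_for_status()
```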

Under the Data Security Law of the People’s Republic of China, Article 35 obliges organisations in China to cooperate with public-security and state-security organs that retrieve data for national-security or criminal investigations (Data Security Law, 2021). The Cybersecurity Law likewise requires companies to provide “technical support and assistance” to security agencies during criminal or national-security investigations (Cybersecurity Law, 2017).

Other countries have similar rules. In the United Kingdom, Section 253 of the Investigatory Powers Act 2016 allows the Home Secretary—subject to judicial approval—to issue a technical-capability notice that compels telecom operators to build or maintain tools that give investigators access to data or decrypt communications (Investigatory Powers Act, 2016).

The United States offers another parallel. The Clarifying Lawful Overseas Use of Data (CLOUD) Act lets U.S. courts compel domestic service providers to produce data “regardless of whether such communication … is located within or outside of the United States,” amending the Stored Communications Act (CLOUD Act, 2018). Section 702 of the Foreign Intelligence Surveillance Act authorises U.S. intelligence agencies to direct electronic communication service providers to furnish data on non-U.S. persons located abroad (Office of the Director of National Intelligence, 2023).

Because organisations may be subject simultaneously to Chinese, UK, and U.S. disclosure mandates, complying with one country’s law can conflict with another’s privacy protections, deepening the cross-border surveillance risk (CFMA, 2025).

AI systems are also beginning to steer personal decisions, recommending what to read, watch, or buy, and over-reliance on such guidance can erode critical-thinking skills, a process called de-skilling (Krook, 2025; Hertie School, 2018). Think of a streaming app that keeps suggesting cartoons: if you always watch whatever pops up next, you may stop picking new shows yourself and miss the fun of choosing (ESET, 2023).
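
A toy recommender makes this feedback loop visible. The sketch below is a deliberate caricature, not any streaming service's algorithm: it always suggests more of whatever category dominates your history, so each accepted suggestion narrows what you see next.

```python
# Toy "watch next" recommender illustrating a narrowing feedback loop.
# A caricature for illustration only, not any real product's algorithm.
from collections import Counter

CATALOGUE = {
    "cartoons": ["Cartoon A", "Cartoon B", "Cartoon C"],
    "nature":   ["Nature A", "Nature B"],
    "history":  ["History A", "History B"],
}

def recommend(history: list[str]) -> str:
    # Always pick from the single most-watched category, so the more
    # you accept suggestions, the less variety you are ever shown.
    top_category, _ = Counter(history).most_common(1)[0]
    shows = CATALOGUE[top_category]
    return shows[len(history) % len(shows)]

history = ["cartoons"]           # one cartoon watched...
for _ in range(3):
    print("Suggested:", recommend(history))  # ...and cartoons are all it suggests
    history.append("cartoons")   # accepting the suggestion reinforces the loop
```

Real recommenders are far more sophisticated, but this reinforcement dynamic (watch more of X, get shown more of X) is the pattern that concerns the de-skilling literature cited above.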

Mitigation Challenges and Shortcomings

Limiting AI surveillance is difficult because privacy laws differ widely and are enforced inconsistently (CFMA, 2025; Privacy International, 2023). Even where strong rules exist, opaque AI models make errors hard to detect and correct (Number Analytics, 2025). Excessive dependence on automated recommendations further undermines human autonomy, discouraging independent thinking (Krook, 2025).

