Glossary
Term | Meaning and Example Sentence
---|---
AI surveillance | Using computers and cameras to watch and analyse what people do. Example: “AI surveillance helps city officials monitor traffic flows.”
Autonomy | The ability to make your own choices and control your life. Example: “Autonomy lets you decide what to read without pressure.”
Self-censoring | Changing behaviour because you feel watched. Example: “After noticing cameras, he began self-censoring his conversations.”
Bias | Treating people unfairly because of who they are. Example: “The AI showed bias by misidentifying faces with darker skin tones.”
De-skilling | Losing important skills due to over-reliance on technology. Example: “De-skilling happens when people let AI choose all their meals.”
Questions
What is AI surveillance, and why does it threaten privacy and freedom?
How can AI surveillance create bias and unfair treatment?
What does it mean to lose autonomy under constant surveillance?
Give an example of one country’s system monitoring people in another country.
Which legal articles in China, the United Kingdom, and the United States allow authorities to compel companies to provide data across borders?
Answer Key
AI surveillance uses computers and cameras to monitor and analyse behaviour. It threatens privacy and freedom because the awareness of being constantly observed pressures people to restrict what they say and do (AI Safety Report, 2025; Number Analytics, 2025).
If training data exclude certain groups, AI may misidentify or unfairly target them, producing biased outcomes (Number Analytics, 2025; CFMA, 2025).
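One way to see this kind of bias concretely is to compare error rates between groups. The short sketch below is purely illustrative: the records are invented toy data (not from any cited study), and it simply computes the false-positive rate, the share of non-matching faces wrongly flagged as matches, for two hypothetical groups.

```python
# Illustrative toy data (invented for this example, not from any study).
# Each record: (group, predicted_match, actual_match)
predictions = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, True),
    ("group_b", True, False), ("group_b", False, False),
]

def false_positive_rate(records, group):
    """Share of true non-matches wrongly flagged as matches for one group."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    false_pos = [r for r in negatives if r[1]]
    return len(false_pos) / len(negatives) if negatives else 0.0

for g in ("group_a", "group_b"):
    print(g, false_positive_rate(predictions, g))
```

In this toy data, group_a is never wrongly flagged while group_b is wrongly flagged two times out of three; a gap like this between groups is exactly the kind of disparate outcome the answer above describes.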
Autonomy declines when people modify actions to avoid surveillance consequences, a behaviour called self-censoring (Hertie School, 2018; Krook, 2025).
Uganda’s Huawei camera network sends data to servers in China, allowing Chinese authorities access under Chinese law (Privacy International, 2020; ICTworks, 2024).
China’s DSL Article 36 and Cybersecurity Law Article 28 compel companies to assist authorities (Data Security Law, 2021; Cybersecurity Law, 2017). The UK’s Investigatory Powers Act 2016 Section 253 authorises technical capability notices (Investigatory Powers Act, 2016). In the U.S., the CLOUD Act and FISA Section 702 require providers to disclose data regardless of location (CLOUD Act, 2018; Office of the Director of National Intelligence, 2023).
References
AI Safety Report. (2025). AI surveillance risks include privacy invasion, bias, discrimination, lack of transparency, and impact on personal freedom and autonomy. https://private-ai.com/en/2025/02/11/ai-safety-report-2025-privacy-risks/
CLOUD Act. (2018). Clarifying Lawful Overseas Use of Data Act, 18 U.S.C. § 2713.
CFMA. (2025, June 11). AI, surveillance, and the fracturing of sovereignty: Ethical concerns in cross-border technology use. https://thecfma.org/2025/06/11/ai-surveillance-and-the-fracturing-of-sovereignty-ethical-concerns-in-cross-border-technology-use/
Cybersecurity Law of the People’s Republic of China. (2017). Article 28.
Data Security Law of the People’s Republic of China. (2021). Article 36.
ESET. (2023, May 1). How algorithms influence a child’s worldview. https://www.eset.com/blog/en/how-algorithms-influence-a-childs-worldview/
Hertie School. (2018, January 1). The threat to human autonomy in AI systems is a design problem. https://www.hertie-school.org/en/digital-governance/research/blog/detail/content/the-threat-to-human-autonomy-in-ai-systems-is-a-design-problem
IBM. (2025, April 28). IBM delivers autonomous security operations with cutting-edge agentic AI. https://newsroom.ibm.com/2025-04-28-ibm-delivers-autonomous-security-operations-with-cutting-edge-agentic-ai
ICTworks. (2024, November 20). PRC malign influence is inspiring Ugandan digital authoritarianism. https://www.ictworks.org/prc-foreign-malign-influence-digital-authoritarianism-uganda/
Investigatory Powers Act. (2016). Section 253: Technical capability notices. https://www.legislation.gov.uk/ukpga/2016/25/section/253
Krook, J. (2025). When autonomy breaks: The hidden existential risk of AI. arXiv. https://arxiv.org/abs/2503.22151
Lawfare. (2025, January 5). The authoritarian risks of AI surveillance. https://www.lawfaremedia.org/article/the-authoritarian-risks-of-ai-surveillance
Number Analytics. (2025, May 27). The ethics of AI surveillance. https://www.numberanalytics.com/blog/ethics-of-ai-surveillance
Office of the Director of National Intelligence. (2023). Section 702 of the Foreign Intelligence Surveillance Act: Fact sheet. https://www.intelligence.gov/702
Privacy International. (2020, June 25). Huawei infiltration in Uganda. https://privacyinternational.org/case-study/3969/huawei-infiltration-uganda
Privacy International. (2023, March 15). Enforcement of data protection laws around the world. https://privacyinternational.org/long-read/4837/enforcement-data-protection-laws-around-world