1.2 Unauthorized Access and Data Breaches
Introduction
Imagine if someone sneaked into your secret clubhouse without asking and took your favorite toys or read your diary. In the world of computers and artificial intelligence (AI), this is called unauthorized access—when someone gets into a computer system or data storage without permission. When these intruders steal, copy, or reveal private information, it is called a data breach (IBM, 2025). These digital break-ins can cause big problems, such as making people feel unsafe, losing money, or not trusting technology anymore.
Technical or Conceptual Background
Unauthorized access happens when someone who is not allowed to use a computer or see certain information finds a way to get in anyway. This can be done by guessing passwords, using old accounts that were never deleted, or taking advantage of weak spots in computer programs (PCPD, 2025). For example, if a company forgets to delete a temporary account used for testing, a hacker might find and use it to sneak into the system.
A data breach is when personal or secret information is exposed to people who should not see it. This could be names, addresses, passwords, or even medical records. AI systems can make these problems worse because they often collect and store lots of personal data to learn and make decisions. Sometimes, attackers trick AI systems using special commands, called prompt injection, to make them reveal secrets or act in ways they shouldn’t (Metomic, 2025). AI also brings new risks, such as data poisoning, where attackers give AI bad information so it learns the wrong things (U.S. Department of Defense, 2025).
AI systems are especially attractive targets because they can remember lots of information, making it possible for attackers to steal private details from many people at once. Traditional security tools might not notice when someone is using prompt injection or poisoning the AI’s data, so new kinds of protection are needed (U.S. Department of Defense, 2025).
Current Trends and Challenges
In recent years, data breaches have become much more common and serious. In the first half of 2024 alone, the number of people affected by data breaches jumped by 490%, with over 1.1 billion individuals—surpassing the total number of victims from all of 2023 by the second quarter alone (Fast Company, 2024). Some of the biggest breaches involved companies like Snowflake, Infosys, and Prudential, where millions of people’s information was exposed at once.
Certain industries, like banks and hospitals, are especially at risk because they store very sensitive information. In 2024, attacks on financial services increased by 67%, and healthcare organizations were also major targets (Fast Company, 2024). AI systems are now involved in many of these breaches. A recent survey found that 73% of companies using AI experienced a data breach related to their AI systems in 2024 or 2025, with each breach costing an average of $4.8 million (Metomic, 2025).
One big challenge is that many companies use old computer programs that no longer get security updates. These are called end-of-support systems, and they are like leaving your house unlocked because the lock is broken and you can’t fix it anymore (PCPD, 2025). Another challenge is that companies are adopting AI very quickly, but they are not spending enough on security to keep up with the new risks. This means there are more opportunities for bad actors to find ways in.
Mitigation Challenges and Shortcomings
Even though companies know about these dangers, they often have trouble fixing them. One reason is that upgrading old computer systems can be expensive and complicated, so some companies wait too long to make changes. Another problem is that there are not always clear rules for how to manage temporary accounts or permissions, so mistakes happen and doors are left open for attackers (PCPD, 2025).
AI systems add extra layers of difficulty. Traditional security tools might not notice when someone is using prompt injection or poisoning the AI’s data. It also takes longer to discover and stop breaches involving AI—on average, 290 days compared to 207 days for regular computer systems (Metomic, 2025). Finally, even when breaches happen, companies do not always explain how the attack worked, making it hard for others to learn and improve their own defenses (Fast Company, 2024). New regulations and standards are being developed, but many organizations still struggle to keep up with the fast pace of change (U.S. Department of Defense, 2025).
Glossary
Term |
Meaning and Example Sentence |
---|---|
Unauthorized Access |
When someone enters a computer system without permission. Example: “The hacker gained unauthorized access to the school’s database.” |
Data Breach |
When private information is stolen or exposed. Example: “The data breach revealed customers’ credit card numbers.” |
End-of-Support Software |
Computer programs no longer receiving safety updates. Example: “Using end-of-support software is like leaving your clubhouse door unlocked.” |
Prompt Injection |
Tricking an AI into doing something harmful. Example: “The attacker used prompt injection to make the AI reveal passwords.” |
Temporary Account |
A short-term login that should be deleted after use. Example: “The forgotten temporary account let hackers into the system.” |
Data Poisoning |
Giving an AI bad information so it learns the wrong things. Example: “The attacker used data poisoning to make the AI give wrong answers.” |
Sensitive Information |
Private details like names, addresses, or medical records. Example: “Doctors must protect sensitive information about their patients.” |
Questions
What is the difference between unauthorized access and a data breach?
Why are outdated software and temporary accounts considered security risks?
How have data breach trends changed in recent years, and which industries are most affected?
What are some unique risks that AI systems introduce in the context of data breaches?
What are two major challenges organizations face in preventing unauthorized access and data breaches?
Answer Key
Suggested Answer: Unauthorized access is when someone gets into a computer system or network without permission, while a data breach is when private or sensitive information is stolen, copied, or exposed because of that access (IBM, 2025).
Suggested Answer: Outdated software is risky because it no longer gets security updates, making it easier for attackers to break in. Temporary accounts are risky because if they are not deleted after use, attackers can find and use them to enter the system (PCPD, 2025).
Suggested Answer: Data breach victims increased by 490% in the first half of 2024, with over 1.1 billion people affected. Financial services and healthcare are the most affected industries because they store very sensitive information (Fast Company, 2024).
Suggested Answer: AI systems can be tricked by prompt injection or data poisoning, which can make them reveal secrets or behave badly. AI systems also store lots of personal data, making them attractive targets for attackers (Metomic, 2025; U.S. Department of Defense, 2025).
Suggested Answer: Two major challenges are the continued use of outdated or unsupported software, and the lack of clear rules for managing temporary accounts and permissions, which leave systems open to attack (PCPD, 2025; U.S. Department of Defense, 2025).
References
Fast
Company. (2024). The
number of data breach victims is up 490% in the first half of 2024.
https://www.fastcompany.com/91158122/data-breach-victims-up-490-percent-first-half-2024
IBM.
(2025). Unauthorized
use risk for AI.
https://www.ibm.com/docs/en/watsonx/saas?topic=atlas-unauthorized-use
Metomic.
(2025). Quantifying
the AI security risk: 2025 breach statistics and financial
implications.
https://www.metomic.io/resource-centre/quantifying-the-ai-security-risk-2025-breach-statistics-and-financial-implications
PCPD.
(2025). Privacy
Commissioner's Office publishes checklist on generative AI.
https://www.pcpd.org.hk/english/news_events/media_statements/press_20250331.html
U.S.
Department of Defense. (2025). Joint
Cybersecurity Information: AI Data Security
(CSI_AI_DATA_SECURITY.PDF).
https://media.defense.gov/2025/May/22/2003720601/-1/-1/0/CSI_AI_DATA_SECURITY.PDF
No comments:
Post a Comment