Privacy and Artificial Intelligence - 1.5 Algorithmic Bias and Discrimination

Introduction

Technology is supposed to help everyone, but sometimes computer programs and AI systems make decisions that are unfair. An unfair decision happens when an AI system treats some people better or worse than others for reasons that are not justified, such as their gender, race, age, or where they live (Mehrabi et al., 2019; Greenlining Institute, 2021). For example, an AI might give a man a better chance of getting a loan than a woman with the same financial background, or it might show certain job ads only to younger people and not older ones. These unfair decisions can mean some people get opportunities, like jobs or loans, while others are left out, even though they deserve the same chance. When such unfairness is built into how a system makes decisions, it is called algorithmic bias. When those decisions lead to people being treated differently or unjustly because of who they are, it is called discrimination (Barocas, Hardt, & Narayanan, 2019).

Technical or Conceptual Background

AI systems learn from data, just as children learn from examples. But if the data they learn from contains mistakes or reflects unfair patterns from the real world, the AI can learn those biases too (Friedman & Nissenbaum, 1996). For instance, if a hiring AI is trained on resumes from a company that mostly hired men in the past, it may start favoring male candidates and ignoring equally qualified women (Bolukbasi et al., 2016; Crescendo.ai, 2025). This is algorithmic bias.
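To see how this happens, here is a minimal sketch in Python. Everything in it is made up for illustration: the data is synthetic, and the features ("group", "skill") are hypothetical, not taken from any real hiring system.

```python
# A minimal sketch: a model trained on skewed historical hiring decisions
# reproduces that skew. All data and feature names are synthetic/hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)    # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)      # both groups have the same skill distribution

# Historical labels were biased: group B needed to clear a higher bar to be hired.
threshold = np.where(group == 1, 0.8, 0.0)
hired = (skill + rng.normal(0, 0.5, n) > threshold).astype(int)

# Train on group membership plus skill, as a careless pipeline might.
X = np.column_stack([group, skill])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# Two candidates with identical skill, differing only in group.
candidates = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(candidates)[:, 1])  # e.g. ~0.5 for A vs ~0.05 for B
```

Both candidates have exactly the same skill, so the gap in predicted scores comes entirely from the biased labels the model learned from.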

Discrimination in AI can be direct or indirect. Direct discrimination happens when the AI uses sensitive information like race or gender to make decisions. Indirect discrimination happens when the AI uses other information that is closely connected to sensitive traits, such as a person’s zip code or school, which can still lead to unfair outcomes (Barocas & Selbst, 2016).
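The sketch below makes indirect discrimination concrete, again with synthetic data. The sensitive attribute is never given to the model, but a correlated stand-in (a made-up zip-code flag) carries the same signal, so the unfair pattern survives:

```python
# A minimal sketch of indirect discrimination through a proxy variable.
# The model never sees "race", only a zip-code flag that correlates with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

race = rng.integers(0, 2, n)                               # sensitive attribute
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)   # 90% correlated proxy
income = rng.normal(0, 1, n)

# Biased historical decisions that depended directly on race.
approved = (income + 1.0 * (race == 0) > 0.5).astype(int)

# Train WITHOUT the sensitive attribute -- only the proxy and income.
X = np.column_stack([zip_code, income])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# Approval rates still differ sharply by race, via the proxy.
for g in (0, 1):
    print(f"race={g}: approval rate {pred[race == g].mean():.2f}")
```

Dropping the sensitive column is therefore not enough on its own; proxies have to be found and handled too.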

Current Trends and Challenges

Algorithmic bias is now a well-known problem in many real-world AI systems. For example, Amazon stopped using an AI recruiting tool after discovering it downgraded resumes containing the word “women’s” or from all-women’s colleges, making it harder for women to get jobs (Crescendo.ai, 2025; Dastin, 2018). In the U.S. justice system, the COMPAS tool was found to label Black defendants as high risk for reoffending almost twice as often as white defendants, even when they did not reoffend (IBM, 2024). In healthcare, a widely used AI system favored white patients over Black patients when deciding who needed extra care, because it used healthcare spending as a measure of need—ignoring the fact that Black patients historically had less access to care (Crescendo.ai, 2025).
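Findings like the COMPAS result come from comparing error rates across groups. Here is a minimal, hypothetical version of that check; the numbers are synthetic stand-ins, not COMPAS data:

```python
# A minimal sketch of a false-positive-rate comparison across groups,
# the kind of measurement behind the COMPAS finding. Data is synthetic.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives (did not reoffend) flagged as high risk."""
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean()

rng = np.random.default_rng(2)
n = 1000
group = rng.integers(0, 2, n)     # demographic group
y_true = rng.integers(0, 2, n)    # 1 = actually reoffended
# A biased score that flags group 1 more aggressively at the same risk level.
y_pred = ((y_true + 0.6 * group + rng.normal(0, 0.5, n)) > 0.8).astype(int)

for g in (0, 1):
    m = (group == g)
    print(f"group {g}: FPR = {false_positive_rate(y_true[m], y_pred[m]):.2f}")
```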

Unfair decisions also happen in finance and advertising. Apple’s credit card AI gave women lower credit limits than men, even when women had higher credit scores and incomes (Crescendo.ai, 2025). Facebook’s job ad system let employers exclude older workers from seeing job postings, making it harder for older adults to find work (Crescendo.ai, 2025). In education, some college admissions algorithms have made it harder for students from diverse backgrounds to get accepted, because they learned from data that favored certain groups (Every Learner Everywhere, 2023).

Bias can also appear in how AI systems generate images or recognize faces. For example, facial recognition systems have been shown to make more mistakes with people who have darker skin, especially women, leading to misidentification and unfair treatment (Buolamwini & Gebru, 2018; Crescendo.ai, 2025).

Mitigation Challenges and Shortcomings

Fixing algorithmic bias is not easy. One challenge is that AI systems are often complex and not transparent, making it difficult to see how decisions are made or to spot bias (Raji et al., 2020). Bias can come from many places: the data, the way the AI is programmed, or even the people who build the AI (Mehrabi et al., 2019). Sometimes, organizations do not have the expertise or resources to check their AI systems for bias or discrimination (Raji et al., 2020).
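One audit step that does not require much specialized tooling is simply comparing selection rates between groups, as sketched below on synthetic data. The 0.8 cutoff follows the common "four-fifths" rule of thumb; it is illustrative, not a legal standard:

```python
# A minimal sketch of a disparate impact check on synthetic predictions.
import numpy as np

def disparate_impact_ratio(pred, group):
    """Lower group selection rate divided by the higher one (1.0 = parity)."""
    rates = [pred[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

rng = np.random.default_rng(3)
group = rng.integers(0, 2, 2000)
# A system that selects group 0 at a 50% rate but group 1 at only 30%.
pred = (rng.random(2000) < np.where(group == 0, 0.5, 0.3)).astype(int)

ratio = disparate_impact_ratio(pred, group)
print(f"disparate impact ratio: {ratio:.2f}")  # ~0.6, below the 0.8 rule of thumb
```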

Even when bias is found, fixing it is complicated. Making an AI system fairer can reduce its accuracy, and a fix for one kind of bias can leave others untouched (Corbett-Davies et al., 2017). There is also a risk of treating bias as a purely technical issue, when it is also a social and ethical problem that requires listening to the people affected (Barocas et al., 2019). Finally, there is no single definition of fairness, so it can be hard to agree on what counts as a fair or unfair decision (Chouldechova, 2017; Human Rights Commission, 2020).
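That disagreement between fairness definitions can be shown directly. In the synthetic example below, a classifier that is equally accurate for both groups has equal true positive rates (one common fairness definition) but unequal selection rates (another), because the two groups have different base rates:

```python
# A minimal sketch: the same predictions pass one fairness metric and fail another.
import numpy as np

rng = np.random.default_rng(4)
n = 4000
group = rng.integers(0, 2, n)
# The groups have different base rates, a common real-world situation.
y_true = (rng.random(n) < np.where(group == 0, 0.5, 0.3)).astype(int)
# A predictor that is equally accurate (90%) for both groups.
y_pred = np.where(rng.random(n) < 0.9, y_true, 1 - y_true)

for g in (0, 1):
    m = (group == g)
    selection_rate = y_pred[m].mean()
    tpr = y_pred[m & (y_true == 1)].mean()   # true positive rate
    print(f"group {g}: selection rate {selection_rate:.2f}, TPR {tpr:.2f}")

# Equal TPRs (equal opportunity holds) but unequal selection rates
# (demographic parity fails): both definitions cannot hold at once here.
```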

Glossary

Algorithmic Bias: When an AI system makes unfair decisions based on patterns in its data. Example: "The AI showed algorithmic bias by favoring some groups over others."

Discrimination: Treating people unfairly because of who they are. Example: "Discrimination means not giving someone a chance because of their race."

Fairness: Treating everyone equally and justly. Example: "Fairness means giving everyone the same chance to play."

Sensitive Attributes: Personal traits such as race, gender, or age. Example: "The AI should not use sensitive attributes to make decisions."

Fairness-Aware Machine Learning: Techniques that make AI decisions more fair. Example: "Fairness-aware machine learning helps reduce bias in AI."

Questions

  1. What does it mean when an AI system makes an unfair decision?

  2. How can AI systems unintentionally discriminate against people?

  3. What are some real-life examples of algorithmic bias?

  4. Why is it difficult to detect and fix bias in AI systems?

  5. What are some challenges in reducing algorithmic bias?

Answer Key

  1. Suggested Answer: An unfair decision is when an AI system treats some people better or worse than others for reasons that are not justified, such as their race, gender, or age. This can mean some people get more opportunities while others are left out, even if they deserve the same chance (Mehrabi et al., 2019; Greenlining Institute, 2021).

  2. Suggested Answer: AI systems can unintentionally discriminate when they learn from biased data or use sensitive attributes directly or indirectly, leading to unfair treatment of certain groups (Barocas & Selbst, 2016).

  3. Suggested Answer: Examples include Amazon’s AI recruiting tool that downgraded women’s resumes, the COMPAS tool labeling Black defendants as high risk, and facial recognition systems making more mistakes with people who have darker skin (Crescendo.ai, 2025; IBM, 2024; Buolamwini & Gebru, 2018).

  4. Suggested Answer: It is difficult to detect and fix bias because AI systems are complex and often not transparent, and bias can come from data, programming, or the people who build the AI (Raji et al., 2020; Mehrabi et al., 2019).

  5. Suggested Answer: Challenges include trade-offs between fairness and accuracy, lack of resources to audit AI, and the need to consider social and ethical issues, not just technical ones (Corbett-Davies et al., 2017; Barocas et al., 2019).

References

Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671-732. https://doi.org/10.2139/ssrn.2477899
Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning. fairmlbook.org.
Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, 4349-4357. https://arxiv.org/abs/1607.06520
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1-15. http://proceedings.mlr.press/v81/buolamwini18a.html
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153-163. https://doi.org/10.1089/big.2016.0047
Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 797-806. https://doi.org/10.1145/3097983.3098095
Crescendo.ai. (2025, June 2). 10 real AI bias examples & mitigation guide. https://www.crescendo.ai/blog/ai-bias-examples-mitigation-guide
Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Every Learner Everywhere. (2023, June 8). What are the risks of algorithmic bias in higher education? https://www.everylearnereverywhere.org/blog/what-are-the-risks-of-algorithmic-bias-in-higher-education/
Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330-347. https://doi.org/10.1145/230538.230561
Greenlining Institute. (2021, February). Algorithmic bias explained. https://greenlining.org/wp-content/uploads/2021/04/Greenlining-Institute-Algorithmic-Bias-Explained-Report-Feb-2021.pdf
IBM. (2024, September 20). What is algorithmic bias? https://www.ibm.com/think/topics/algorithmic-bias
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1-35. https://doi.org/10.1145/3457607
Raji, I. D., Smart, A., White, R. N., Mitchell, M., & Gebru, T. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 33-44. https://doi.org/10.1145/3351095.3372873



