UNCOVERING HIDDEN BIASES IN MACHINE LEARNING MODELS: A STEP TOWARD ETHICAL AI

Authors

  • Aniket Maity, School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneswar, Odisha

DOI:

https://doi.org/10.63503/c.acset.2025.17

Keywords:

Algorithmic Bias, Fairness in Machine Learning, Bias Mitigation, Ethical AI, Demographic Parity, Equal Opportunity, Disparate Impact Ratio

Abstract

With the widespread use of artificial intelligence (AI) systems in everyday applications, ensuring fairness has become a critical aspect of system design. AI models increasingly influence high-stakes decisions in domains such as finance, hiring, and criminal justice, where biased outcomes can have severe societal consequences. Recent developments in machine learning and deep learning have begun to address these challenges, emphasizing the need for systematic bias mitigation strategies.

This study presents a structured framework for bias detection and mitigation in machine learning, aiming to reconcile predictive performance with algorithmic fairness. Using the UCI Adult dataset as a case study, the framework adopts a dual-phase approach: preprocessing with Reweighing to correct historical imbalances, and postprocessing with Threshold Optimization to adjust decision thresholds for protected and unprotected groups. This combination addresses both data-level and decision-level disparities without requiring complete model retraining.

The framework was evaluated across four supervised models: k-Nearest Neighbors, Decision Tree, Logistic Regression, and Random Forest. Mitigated models achieved substantial reductions in Demographic Parity Difference and Equal Opportunity Difference, alongside improved Disparate Impact Ratios, while maintaining competitive accuracy, precision, recall, and F1-score. These findings demonstrate that fairness-aware interventions can reduce group-level bias without critical performance loss, contributing to the development of trustworthy and socially responsible AI systems.
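As a concrete illustration of the preprocessing phase, the sketch below applies Reweighing from the AI Fairness 360 toolkit (ref. 11) to the Adult data and feeds the resulting instance weights to a scikit-learn Logistic Regression, one of the four models evaluated. This is a minimal sketch under stated assumptions, not the paper's exact configuration: 'sex' is taken as the protected attribute, the 70/30 split is illustrative, and the raw UCI Adult files must already be placed where AIF360 expects them.

    # Minimal sketch of the Reweighing preprocessing step.
    # Assumes AIF360 and scikit-learn are installed and the raw UCI Adult
    # files have been downloaded into AIF360's data directory.
    from aif360.datasets import AdultDataset
    from aif360.algorithms.preprocessing import Reweighing
    from sklearn.linear_model import LogisticRegression

    privileged = [{'sex': 1}]      # 'Male' is encoded as 1 by default
    unprivileged = [{'sex': 0}]

    data = AdultDataset()          # default protected attributes: 'race', 'sex'
    train, test = data.split([0.7], shuffle=True)

    # Reweighing assigns instance weights that balance the joint
    # distribution of group membership and label in the training data,
    # correcting historical imbalance without altering the features.
    rw = Reweighing(unprivileged_groups=unprivileged,
                    privileged_groups=privileged)
    train_rw = rw.fit_transform(train)

    # Any estimator that accepts sample weights can consume the result;
    # the model itself is unchanged.
    model = LogisticRegression(max_iter=1000)
    model.fit(train_rw.features, train_rw.labels.ravel(),
              sample_weight=train_rw.instance_weights)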
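The postprocessing phase and the reported group-fairness metrics can likewise be sketched with fairlearn's ThresholdOptimizer, which learns group-specific decision thresholds on top of an already-fitted model, matching the point above that no retraining is required. Again a hedged sketch: the Adult data is pulled from OpenML here for brevity, and the demographic-parity constraint and accuracy objective are illustrative choices, not necessarily those used in the study. demographic_parity_ratio plays the role of the Disparate Impact Ratio (the ratio of group selection rates), and the Equal Opportunity Difference is computed as the between-group gap in true-positive rate.

    # Minimal sketch of threshold optimization plus the fairness metrics.
    # Assumes fairlearn, scikit-learn, and pandas are installed.
    import pandas as pd
    from sklearn.datasets import fetch_openml
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from fairlearn.postprocessing import ThresholdOptimizer
    from fairlearn.metrics import (MetricFrame, true_positive_rate,
                                   demographic_parity_difference,
                                   demographic_parity_ratio)

    adult = fetch_openml('adult', version=2, as_frame=True)
    X = pd.get_dummies(adult.data, drop_first=True)
    y = (adult.target == '>50K').astype(int)
    sex = adult.data['sex']        # protected attribute

    X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
        X, y, sex, test_size=0.3, random_state=0)

    base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Group-specific thresholds are fitted on top of the trained model;
    # the underlying estimator is not retrained (prefit=True).
    post = ThresholdOptimizer(estimator=base,
                              constraints='demographic_parity',
                              objective='accuracy_score',
                              prefit=True)
    post.fit(X_tr, y_tr, sensitive_features=s_tr)
    y_hat = post.predict(X_te, sensitive_features=s_te)

    # Demographic Parity Difference and Disparate Impact Ratio.
    print(demographic_parity_difference(y_te, y_hat, sensitive_features=s_te))
    print(demographic_parity_ratio(y_te, y_hat, sensitive_features=s_te))

    # Equal Opportunity Difference: max-min gap in true-positive rate.
    tpr = MetricFrame(metrics=true_positive_rate, y_true=y_te,
                      y_pred=y_hat, sensitive_features=s_te)
    print(tpr.difference())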

References

1. A. Agarwal, M. Dudik, and Z. S. Wu, “Fair regression: Quantitative definitions and reduction-based algorithms,” in Proc. Int. Conf. Mach. Learn., 2019, pp. 120–129.

2. S. Aghaei, M. J. Azizi, and P. Vayanos, “Learning optimal and fair decision trees for nondiscriminative decision-making,” in Proc. AAAI Conf. Artif. Intell., vol. 33, no. 1, 2019, pp. 1418–1426.

3. N. Alipourfard, P. G. Fennell, and K. Lerman, “Can you trust the trend? Discovering Simpson’s paradoxes in social data,” in Proc. 11th ACM Int. Conf. Web Search Data Mining, 2018, pp. 19–27.

4. N. Alipourfard, P. G. Fennell, and K. Lerman, “Using Simpson’s paradox to discover interesting patterns in behavioral data,” in Proc. 12th AAAI Conf. Web Social Media, 2018.

5. A. Amini, A. Soleimany, W. Schwarting, S. Bhatia, and D. Rus, “Uncovering and mitigating algorithmic bias through learned latent structure,” in Proc. AAAI/ACM Conf. AI, Ethics, and Society, 2019.

6. J. Angwin, J. Larson, S. Mattu, and L. Kirchner, “Machine bias: There’s software used across the country to predict future criminals—and it’s biased against Blacks,” ProPublica, 2016. [Online]. Available: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

7. A. Asuncion and D. J. Newman, “UCI machine learning repository,” Univ. California, Irvine, School Inf. Comput. Sci., 2007. [Online]. Available: http://www.ics.uci.edu/~mlearn/MLRepository.html

8. A. Backurs, P. Indyk, K. Onak, B. Schieber, A. Vakilian, and T. Wagner, “Scalable fair clustering,” in Proc. 36th Int. Conf. Mach. Learn., vol. 97, 2019, pp. 405–413. [Online]. Available: http://proceedings.mlr.press/v97/backurs19a.html

9. R. Baeza-Yates, “Bias on the web,” Commun. ACM, vol. 61, no. 6, pp. 54–61, 2018. [Online]. Available: https://doi.org/10.1145/3209581

10. S. Barbosa, D. Cosley, A. Sharma, and R. M. Cesar-Jr., “Averaging gone wrong: Using time-aware analyses to better understand behavior,” in Proc. Int. AAAI Conf. Web Social Media, 2016, pp. 829–841.

11. R. K. E. Bellamy et al., “AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias,” arXiv preprint arXiv:1810.01943, 2018.

12. E. M. Bender and B. Friedman, “Data statements for natural language processing: Toward mitigating system bias and enabling better science,” Trans. Assoc. Comput. Linguist., vol. 6, pp. 587–604, 2018. [Online]. Available: https://doi.org/10.1162/tacl_a_00041

Published

2025-11-24

How to Cite

Aniket Maity. (2025). UNCOVERING HIDDEN BIASES IN MACHINE LEARNING MODELS: A STEP TOWARD ETHICAL AI. Adroid Conference Series: Engineering and Technology, 1, 162-172. https://doi.org/10.63503/c.acset.2025.17