Ethics of AI in Decision-Making: Evaluating Biases in Machine Learning from Philosophical and Legal Perspectives
Abstract
Artificial Intelligence (AI) has increasingly been integrated into decision-making processes across various domains, including healthcare, criminal justice, finance, and employment. While AI promises efficiency and objectivity, concerns about embedded biases and ethical implications persist. This paper examines the ethical and legal dimensions of AI-driven decision-making, focusing on biases in machine learning (ML) models. Drawing from philosophical theories of justice, fairness, and moral responsibility, alongside legal frameworks governing AI, we evaluate how biases emerge, their societal impacts, and potential mitigation strategies. A systematic review of 57 scholarly works highlights the intersection of technology, ethics, and law, advocating for transparent, accountable, and equitable AI systems.
How to Cite This Article
Thompson, W. (2025). Ethics of AI in Decision-Making: Evaluating Biases in Machine Learning from Philosophical and Legal Perspectives. Global Multidisciplinary Perspectives Journal (GMPJ), 2(1), 15-16.