Cecez-Kecmanovic, D. (2025).
Information and Organization, 35(3), 100587.
Abstract
The grand humanist project of technological advancement has culminated in fascinating intelligent technologies and AI-based automated decision-making systems (ADMS) that replace human decision-makers in complex social processes. Widespread use of ADMS, underpinned by humanist values and ethics, is claimed to contribute not only to more effective and efficient, but also to more objective, non-biased, fair, responsible, and ethical decision-making. A growing literature, however, shows paradoxical outcomes: the use of ADMS often discriminates against certain individuals and groups and produces detrimental and harmful social consequences. What is at stake is the reconstruction of reality in the image of ADMS, which threatens our existence and sociality. This presents a compelling motivation for this article, which examines a) on what bases ADMS are claimed to be ethical, b) how ADMS, designed and implemented with the explicit aim of acting ethically, produce individually and socially harmful consequences, and c) whether ADMS, or more broadly, automated algorithmic decision-making, can be ethical. The article contributes a critique of the dominant humanist ethical theories underpinning the development and use of ADMS and demonstrates why such theories are inadequate for understanding and responding to ADMS' harmful consequences and emerging ethical demands. To respond to such demands, the article contributes a posthumanist relational ethics (extending Barad's agential realist ethics with Zigon's relational ethics) that enables a novel understanding of how ADMS perform harmful effects and why the ethical demands of subjects of decision-making cannot be met. The article also explains why ADMS are not and cannot be ethical, and why the very concept of automated decision-making in complex social processes is flawed and dangerous, threatening our sociality and humanity.
This article offers a critical posthumanist analysis of automated algorithmic decision-making systems (ADMS) and their ethical implications, with direct relevance for psychologists concerned with fairness, human dignity, and social justice. The author argues that despite claims of objectivity, neutrality, and ethical superiority, ADMS frequently reproduce and amplify societal biases—leading to discriminatory, harmful outcomes in domains like hiring, healthcare, criminal justice, and welfare. These harms stem not merely from flawed data or design, but from the foundational humanist assumptions underpinning both ADMS and conventional ethical frameworks (e.g., deontological and consequentialist ethics), which treat decision-making as a detached, rational process divorced from embodied, relational human experience. Drawing on Barad’s agential realism and Zigon’s relational ethics, the article proposes a posthumanist relational ethics that centers on responsiveness, empathic attunement, and accountability within entangled human–nonhuman assemblages. From this perspective, ADMS are inherently incapable of ethical decision-making because they exclude the very relational, affective, and contextual dimensions—such as compassion, dialogue, and care—that constitute ethical responsiveness in complex social situations. The article concludes that automating high-stakes human decisions is not only ethically untenable but also threatens sociality and humanity itself.