Peters, U. Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. AI Ethics 3, 963–974 (2023).
Abstract
Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument overlooks that human decision-making is sometimes significantly more transparent and trustworthy than algorithmic decision-making. This is because when people explain their decisions by giving reasons for them, this frequently prompts those giving the reasons to govern or regulate themselves so as to think and act in ways that confirm their reason reports. AI explanation systems lack this self-regulative feature. Overlooking it when comparing algorithmic and human decision-making can result in underestimations of the transparency of human decision-making and in the development of explainable AI that may mislead people by activating generally warranted beliefs about the regulative dimension of reason-giving.
Here are some thoughts:
This article examines the ethics of transparency in algorithmic decision-making (ADM) versus human decision-making (HDM). While both can be opaque, Peters argues that HDM is often more trustworthy because of "mindshaping": giving reasons for a decision, even when the underlying thought process is not fully accessible, prompts people to regulate their future thinking and behavior so that it conforms to the reasons they reported. AI explanation systems lack this self-regulative feature, which can make opaque ADM less trustworthy than opaque HDM. The upshot is that explanations serve a function beyond making a decision process intelligible.

The article also raises concerns about "deceptive AI" whose explanations activate generally warranted expectations about the regulative role of reason-giving without honoring them, and it warns that overlooking mindshaping leads to underestimating the transparency of HDM. Key ethical considerations include the need for further research on how mindshaping affects bias, and the limits of explanations in both ADM and HDM. Ultimately, the article highlights the importance of developing explainable AI that does more than supply post hoc justifications, while emphasizing fairness, accountability, and the responsible use of explanations in building trustworthy AI systems.
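To make the contrast concrete, here is a minimal sketch (my illustration, not from the article) of post-hoc reason-giving for a machine learning model. The loan-style feature names and the weight-times-input attribution scheme are assumptions for illustration. The point it demonstrates is the asymmetry Peters describes: the explanation is computed after the decision, and producing it leaves the model unchanged, so, unlike a human giving reasons, the system cannot be "mindshaped" by its own explanation.

```python
# Sketch: a post-hoc, simplifying "reason" for an ML model's decision.
# Hypothetical loan-approval setup; feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed"]

# Synthetic training data and a fitted classifier standing in for the ADM system.
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def give_reasons(x, k=2):
    """Return the k features that contributed most to this decision.

    A selective, simplifying reason report (weight * input per feature),
    not an exhaustive causal account of the model's computation.
    """
    contributions = model.coef_[0] * x
    ranked = sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))
    return ranked[:k]

applicant = X[0]
print("decision:", int(model.predict(applicant.reshape(1, -1))[0]))
print("stated reasons:", give_reasons(applicant))
# Crucially, calling give_reasons() leaves `model` untouched: the system's
# future decisions are not regulated by the reasons it has given, whereas a
# human's reason report would tend to shape their subsequent behavior.
```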