Demaree-Cotton, J., Earp, B. D., & Savulescu, J. (2022). American Journal of Bioethics, 22(7), 1–3.
Here is an excerpt:
The kind of AI proposed by Meier and colleagues (2022) has the fascinating potential to improve the transparency of ethical decision-making, at least if it is used as a decision aid rather than a decision replacement (Savulescu & Maslen 2015). While artificial intelligence cannot itself engage in the human communicative process of justifying its decisions to patients, the AI they describe (unlike “black-box” AI) makes explicit which values and principles are involved and how much weight they are given.
By contrast, the moral principles or values underlying human moral intuition are not always consciously, introspectively accessible (Cushman, Young, and Hauser 2006). While humans sometimes have a fuzzy, intuitive sense of some of the factors that are relevant to their moral judgment, we often have strong moral intuitions without being sure of their source, or without being clear on precisely how strongly different factors played a role in generating the intuitions. But if clinicians make use of the AI as a decision aid, this could help them to transparently and precisely communicate the actual reasons behind their decision.
This is so even if the AI’s recommendation is ultimately rejected. Suppose, for example, that the AI recommends a course of action with a certain degree of confidence, and specifies the exact values or weights it has assigned to autonomy versus beneficence in reaching that conclusion. Evaluating the AI’s recommendation could help a committee make explicit the “black box” aspects of their own reasoning. For example, the committee might decide that beneficence should actually be weighted more heavily in this case than the AI suggests. Seeing precisely why their decision diverges from the AI’s gives them the opportunity to offer a further justification for weighting beneficence more heavily; and this, in turn, could improve the transparency of their recommendation.
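To make this concrete, here is a minimal, purely hypothetical sketch of the kind of transparent, weight-based decision aid discussed above. It is not the system described by Meier and colleagues (2022); the option names, principle scores, and weights are all invented for illustration. The point is simply that when the weights and per-principle contributions are explicit, a committee can see exactly where its own weighting diverges and re-run the same calculation with different weights.

```python
# Hypothetical sketch of a transparent, weight-based decision aid.
# Illustrative only; not the algorithm of Meier et al. (2022).

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    # Invented scores in [0, 1] for how well the option serves each principle.
    autonomy: float
    beneficence: float


def recommend(options, weights):
    """Return the highest-scoring option together with a full breakdown.

    Exposing the weights and per-principle contributions is what makes the
    recommendation auditable: a committee can see exactly where it would
    assign different weights, and why.
    """
    breakdown = {
        opt.name: {
            "autonomy": weights["autonomy"] * opt.autonomy,
            "beneficence": weights["beneficence"] * opt.beneficence,
        }
        for opt in options
    }
    totals = {name: sum(parts.values()) for name, parts in breakdown.items()}
    best = max(totals, key=totals.get)
    return best, totals, breakdown


if __name__ == "__main__":
    options = [
        Option("continue treatment", autonomy=0.3, beneficence=0.8),
        Option("withdraw treatment", autonomy=0.9, beneficence=0.4),
    ]
    # The AI's weights are explicit; a committee that thinks beneficence
    # deserves more weight can rerun the calculation with its own weights.
    ai_weights = {"autonomy": 0.6, "beneficence": 0.4}
    print(recommend(options, ai_weights))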
However, the potential for the kind of AI described in the target article to improve the accuracy of moral decision-making may be more limited. This is so for two reasons. Firstly, whether AI can be expected to outperform human decision-making depends in part on the metrics used to train it. In non-ethical domains, superior accuracy can be achieved because the “verdicts” given to the AI in the training phase are not solely the human judgments that the AI is intended to replace or inform. Consider how AI can learn to detect lung cancer from scans at a superior rate to human radiologists after being trained on large datasets and being “told” which scans show cancer and which ones are cancer-free. Importantly, this training includes cases where radiologists did not recognize cancer in the early scans themselves, but where further information verified the correct diagnosis later on (Ardila et al. 2019). Consequently, these AIs are able to detect patterns even in early scans that are not apparent or easily detectable by human radiologists, leading to superior accuracy compared to human performance.
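As a rough illustration of this point (and emphatically not the Ardila et al. 2019 pipeline), the toy sketch below trains two identical classifiers on a synthetic dataset: one on simulated contemporaneous “radiologist” labels, in which a fraction of true positives are missed, and one on later-verified outcome labels. Evaluated against the verified outcomes, the model trained on verified labels can exceed the ceiling set by the human labels it would otherwise merely imitate. The dataset, noise rate, and model choice are assumptions made purely for illustration.

```python
# Toy illustration of why training labels matter; not a medical-imaging model.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "scans": features X with later-verified outcomes y_verified.
X, y_verified = make_classification(n_samples=4000, n_features=20, random_state=0)

# Simulated contemporaneous human labels: roughly 15% of positive cases are
# relabelled negative, standing in for cancers missed on the early scan.
y_human = y_verified.copy()
missed = rng.random(len(y_human)) < 0.15
y_human[(y_verified == 1) & missed] = 0

X_train, X_test, yv_train, yv_test, yh_train, _ = train_test_split(
    X, y_verified, y_human, test_size=0.25, random_state=0
)

# A model trained only on human judgments can at best reproduce those judgments,
# whereas a model trained on later-verified outcomes can exceed them.
model_human = LogisticRegression(max_iter=1000).fit(X_train, yh_train)
model_verified = LogisticRegression(max_iter=1000).fit(X_train, yv_train)

print("trained on human labels:   ",
      accuracy_score(yv_test, model_human.predict(X_test)))
print("trained on verified labels:",
      accuracy_score(yv_test, model_verified.predict(X_test)))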