Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Attributions.

Monday, June 10, 2024

Attributions toward artificial agents in a modified Moral Turing Test

Aharoni, E., Fernandes, S., Brady, D.J. et al.
Sci Rep 14, 8458 (2024).

Abstract

Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by Allen et al.’s proposal (Exp Theor Artif Intell 352:24–28, 2004), by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI’s moral reasoning as superior in quality to humans’ along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans’ raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.

Here is my summary:

The researchers conducted a modified Moral Turing Test (m-MTT) to investigate whether people view moral evaluations by advanced AI systems similarly to those by humans. Participants rated the quality of moral reasoning from the AI language model GPT-4 and from humans while initially blinded to the source.

Key Findings
  • Remarkably, participants rated GPT-4's moral reasoning as superior in quality to humans' across dimensions like virtuousness, intelligence, and trustworthiness. This is consistent with passing the "comparative MTT" proposed previously.
  • When later asked to identify if the moral evaluations came from a human or computer, participants performed above chance levels.
  • However, GPT-4 did not definitively "pass" this test, potentially because its perceived superiority made it identifiable as AI.

Saturday, November 4, 2023

One strike and you’re a lout: Cherished values increase the stringency of moral character attributions

Rottman, J., Foster-Hanson, E., & Bellersen, S.
(2023). Cognition, 239, 105570.

Abstract

Moral dilemmas are inescapable in daily life, and people must often choose between two desirable character traits, like being a diligent employee or being a devoted parent. These moral dilemmas arise because people hold competing moral values that sometimes conflict. Furthermore, people differ in which values they prioritize, so we do not always approve of how others resolve moral dilemmas. How are we to think of people who sacrifice one of our most cherished moral values for a value that we consider less important? The “Good True Self Hypothesis” predicts that we will reliably project our most strongly held moral values onto others, even after these people lapse. In other words, people who highly value generosity should consistently expect others to be generous, even after they act frugally in a particular instance. However, reasoning from an error-management perspective instead suggests the “Moral Stringency Hypothesis,” which predicts that we should be especially prone to discredit the moral character of people who deviate from our most deeply cherished moral ideals, given the potential costs of affiliating with people who do not reliably adhere to our core moral values. In other words, people who most highly value generosity should be quickest to stop considering others to be generous if they act frugally in a particular instance. Across two studies conducted on Prolific (N = 966), we found consistent evidence that people weight moral lapses more heavily when rating others’ membership in highly cherished moral categories, supporting the Moral Stringency Hypothesis. In Study 2, we examined a possible mechanism underlying this phenomenon. Although perceptions of hypocrisy played a role in moral updating, personal moral values and subsequent judgments of a person’s potential as a good cooperative partner provided the clearest explanation for changes in moral character attributions. 
Overall, the robust tendency toward moral stringency carries significant practical and theoretical implications.


My takeaways:

The results showed that participants were more likely to rate the person as having poor moral character when the transgression violated a cherished value. This suggests that when we see someone violate a value that we hold dear, it can lead us to question their entire moral compass.

The authors argue that this finding has important implications for how we think about moral judgment. They suggest that our own values play a significant role in how we judge others' moral character. This is something to keep in mind the next time we're tempted to judge someone harshly.

Here are some additional points made in the article:
  • The effect of cherished values on moral judgment is stronger for people who are more strongly identified with their values.
  • The effect is also stronger for transgressions that are seen as more serious.
  • The effect is not limited to personal values. It can also occur for group-based values, such as patriotism or religious beliefs.

Saturday, March 19, 2022

The Content of Our Character

Brown, Teneille R.
Available at SSRN: https://ssrn.com/abstract=3665288

Abstract

The rules of evidence assume that jurors can ignore most character evidence, but the data are clear. Jurors simply cannot *not* make character inferences. We are so driven to use character to assess blame, that we will spontaneously infer traits based on whatever limited information is available. In fact, within just 0.1 seconds of meeting someone, we have already decided if we think they are intelligent, trustworthy, likable, or kind--based just on the person’s face. This is a completely unregulated source of evidence, and yet it predicts teaching evaluations, electoral success, and even sentencing decisions. Given the pervasive and unintentional nature of “spontaneous trait inferences” (STIs), they are not susceptible to mitigation through jury instructions. However, recognizing that witnesses will be viewed as more or less trustworthy based just on their face, the rules of evidence must permit more character evidence, rather than less. This article harnesses undisputed findings from social psychology to propose a reversal of the ban on character evidence, in favor of a strong presumption against admissibility for immoral traits only. This removes a great deal from the rule’s crosshairs and re-tethers it to its normative roots. My proposal does not rely on the gossamer thin distinction between propensity and non-propensity uses, because once jurors hear about past act evidence, they will subconsciously draw an impermissible character inference. However, in some cases this might not be unfairly prejudicial, and may even be necessary for justice. The critical contribution of this article is that while shielding jurors from character evidence has noble origins, it also has unintended, negative consequences. When jurors cannot hear about how someone acted in the past, they will instead rely on immutable facial features—connected to racist, sexist and classist stereotypes—to draw character inferences that are even more inaccurate and unfair.

Here is a section from the article:

Moral Character Impacts Ratings of Intent

Previous models of intentionality held that for an act to be considered intentional, three things had to be present. The actor must have believed that an action would result in a particular outcome, desired this outcome, and had full awareness of his behavior. Research now challenges this account, “showing that individuals attribute intentions to others even (and largely) in the absence of these components.” Even where an actor could not have acted otherwise, and thus was coerced to kill, study participants found the actor to be more morally responsible for an act if he “identified” with it, meaning that he desired the compelled outcome. These findings do not fit with our typical model of blame, which requires freedom to act in order to assign responsibility. However, they make sense if we adopt a character-based approach to blame. We are quick to infer a bad character and intent when there is very little evidence of it.

An example of this is the hindsight bias called the “praise-blame asymmetry,” where people blame actors for accidental bad outcomes that they caused but did not intend, but do not praise people for accidental good outcomes that they likewise caused but did not intend. The classic example is the CEO who considers a development project that will increase profits. The CEO is agnostic to the project’s environmental effects and gives it the go-ahead. If the project’s outcome turns out to harm the environment, people say the CEO intended the bad outcome and they blame him for it. However, if instead the project turns out to benefit the environment, the CEO receives no praise. Our folk conception of intentionality is tied to morality and aversion to negative outcomes. If a foreseen outcome is negative, people will attribute intentionality to the decision-maker, but not if the foreseen outcome is positive; the overattribution of intent only seems to cut one way. Mens rea ascriptions are “sensitive to moral valence . . . . If the outcome is negative, foreknowledge standardly suffices for people to ascribe intentionality.” This effect has been found not just in laypeople, but also in French judges. If an action is considered immoral, then our emotional reaction to it can bias mental state ascriptions.

Tuesday, February 15, 2022

How do people use ‘killing’, ‘letting die’ and related bioethical concepts? Contrasting descriptive and normative hypotheses

Rodríguez-Arias, D., et al. (2020).
Bioethics, 34(5)
DOI: 10.1111/bioe.12707

Abstract

Bioethicists involved in end-of-life debates routinely distinguish between ‘killing’ and ‘letting die’. Meanwhile, previous work in cognitive science has revealed that when people characterize behaviour as either actively ‘doing’ or passively ‘allowing’, they do so not purely on descriptive grounds, but also as a function of the behaviour’s perceived morality. In the present report, we extend this line of research by examining how medical students and professionals (N = 184) and laypeople (N = 122) describe physicians’ behaviour in end-of-life scenarios. We show that the distinction between ‘ending’ a patient’s life and ‘allowing’ it to end arises from morally motivated causal selection. That is, when a patient wishes to die, her illness is treated as the cause of death and the doctor is seen as merely allowing her life to end. In contrast, when a patient does not wish to die, the doctor’s behaviour is treated as the cause of death and, consequently, the doctor is described as ending the patient’s life. This effect emerged regardless of whether the doctor’s behaviour was omissive (as in withholding treatment) or commissive (as in applying a lethal injection). In other words, patient consent shapes causal selection in end-of-life situations, and in turn determines whether physicians are seen as ‘killing’ patients, or merely as ‘enabling’ their death.

From the Discussion

Across three cases of end-of-life intervention, we find convergent evidence that moral appraisals shape behavior description (Cushman et al., 2008) and causal selection (Alicke, 1992; Kominsky et al., 2015). Consistent with the deontic hypothesis, physicians who behaved according to patients’ wishes were described as allowing the patient’s life to end. In contrast, physicians who disregarded the patient’s wishes were described as ending the patient’s life. Additionally, patient consent appeared to inform causal selection: the doctor was seen as the cause of death when disregarding the patient’s will, but the illness was seen as the cause of death when the doctor had obeyed the patient’s will.

Whether the physician’s behavior was omissive or commissive did not play a comparable role in behavior description or causal selection. First, these effects were weaker than those of patient consent. Second, while the effects of consent generalized to medical students and professionals, the effects of commission arose only among lay respondents. In other words, medical students and professionals treated patient consent as the sole basis for the doing/allowing distinction.

Taken together, these results confirm that doing and allowing serve a fundamentally evaluative purpose (in line with the deontic hypothesis, and Cushman et al., 2008), and only secondarily serve a descriptive purpose, if at all.

Sunday, May 9, 2021

For Whom Does Determinism Undermine Moral Responsibility? Surveying the Conditions for Free Will Across Cultures

Hannikainen, I., et al.
Front. Psychol., 05 November 2019
https://doi.org/10.3389/fpsyg.2019.02428

Abstract

Philosophers have long debated whether, if determinism is true, we should hold people morally responsible for their actions since in a deterministic universe, people are arguably not the ultimate source of their actions nor could they have done otherwise if initial conditions and the laws of nature are held fixed. To reveal how non-philosophers ordinarily reason about the conditions for free will, we conducted a cross-cultural and cross-linguistic survey (N = 5,268) spanning twenty countries and sixteen languages. Overall, participants tended to ascribe moral responsibility whether the perpetrator lacked sourcehood or alternate possibilities. However, for American, European, and Middle Eastern participants, being the ultimate source of one’s actions promoted perceptions of free will and control as well as ascriptions of blame and punishment. By contrast, being the source of one’s actions was not particularly salient to Asian participants. Finally, across cultures, participants exhibiting greater cognitive reflection were more likely to view free will as incompatible with causal determinism. We discuss these findings in light of documented cultural differences in the tendency toward dispositional versus situational attributions.

Discussion

At the aggregate level, we found that participants blamed and punished agents whether they only lacked alternate possibilities (Miller and Feltz, 2011) or whether they also lacked sourcehood (Nahmias et al., 2005; Nichols and Knobe, 2007). Thus, echoing early findings, laypeople did not take alternate possibilities or sourcehood as necessary conditions for free will and moral responsibility.

Yet, our study also revealed a dramatic cultural difference: Throughout the Americas, Europe, and the Middle East, participants viewed the perpetrator with sourcehood (in the CI scenario) as freer and more morally responsible than the perpetrator without sourcehood (in the AS scenario). Meanwhile, South and East Asian participants evaluated both perpetrators in a strikingly similar way. We interpreted these results in light of cultural variation in dispositional versus situational attributions (Miller, 1984; Morris and Peng, 1994; Choi et al., 1999; Chiu et al., 2000). From a dispositionist perspective, participants may be especially attuned to the absence of sourcehood: When an agent is the source of their action, people may naturally conjure dispositionist explanations that refer to her goals, desires (e.g., because “she wanted a new life”) or character (e.g., because “she is ruthless”). In contrast, when actions result from a causal chain originating at the beginning of the universe, explanations of this sort – implying sourcehood – seem particularly unsatisfactory and incomplete. In contrast, from a situationist perspective, whether the agent could be seen as the source of her action may be largely irrelevant: Instead, a situationist may think of others’ behavior as the product of extrinsic pressures – from momentary upheaval, to the way they were raised, social norms or fate – and thus perceive both agents, in the CI and AS cases, as similar in matters of free will and moral responsibility.

Monday, January 27, 2020

The Character of Causation: Investigating the Impact of Character, Knowledge, and Desire on Causal Attributions

Justin Sytsma
(2019) Preprint

Abstract

There is a growing consensus that norms matter for ordinary causal attributions. This has important implications for philosophical debates over actual causation. Many hold that theories of actual causation should coincide with ordinary causal attributions, yet those attributions often diverge from the theories when norms are involved. There remains substantive debate about why norms matter for causal attributions, however. In this paper, I consider two competing explanations—Alicke’s bias view, which holds that the impact of norms reflects systematic error (suggesting that ordinary causal attributions should be ignored in the philosophical debates), and our responsibility view, which holds that the impact of norms reflects the appropriate application of the ordinary concept of causation (suggesting that philosophical accounts are not analyzing the ordinary concept). I investigate one key difference between these views: the bias view, but not the responsibility view, predicts that “peripheral features” of the agents in causal scenarios—features that are irrelevant to appropriately assessing responsibility for an outcome, such as general character—will also impact ordinary causal attributions. These competing predictions are tested for two different types of scenarios. I find that information about an agent’s character does not impact causal attributions on its own. Rather, when character shows an effect it works through inferences to relevant features of the agent. In one scenario this involves inferences to the agent’s knowledge of the likely result of her action and her desire to bring about that result, with information about knowledge and desire each showing an independent effect on causal attributions.

From the Conclusion:

Alicke’s bias view holds that not only do features of the agent’s mental states matter, such as her knowledge and desires concerning the norm and the outcome, but also peripheral features of the agent whose impact could only reasonably be explained in terms of bias. In contrast, our responsibility view holds that the impact of norms does not reflect bias, but rather that ordinary causal attributions issue from the appropriate application of a concept with a normative component. As such, we predict that while judgments about the agent’s mental states that are relevant to adjudicating responsibility will matter, peripheral features of the agent will only matter insofar as they warrant an inference to other features of the agent that are relevant.

 In line with the responsibility view and against the bias view, the results of the studies presented in this paper suggest that information relevant to assessing an agent’s character matters but only when it warrants an inference to a non-peripheral feature, such as the agent’s negligence in the situation or her knowledge and desire with regard to the outcome. Further, the results indicate that information about an agent’s knowledge and desire both impact ordinary causal attributions in the scenario tested. This raises an important methodological issue for empirical work on ordinary causal attributions: researchers need to carefully consider and control for the inferences that participants might draw concerning the agents’ mental states and motivations.


Tuesday, January 14, 2020

Exceptionality Effect in Agency: Exceptional Choices Attributed Higher Free Will Than Routine

Fillon, A., Lantian, A., Feldman, G., & N'gbala, A.
PsyArXiv
Originally posted November 9, 2019

Abstract

Exceptionality effect is the widely cited phenomenon that people experience stronger regret regarding negative outcomes that are a result of more exceptional circumstances, compared to routine. We hypothesize that the exceptionality-routine attribution asymmetry would extend to attributions of freedom and responsibility. In Experiment 1 (N = 338), we found that people attributed more free will to exceptional behavior compared to routine, when the exception was due to self-choice rather than due to external circumstances. In Experiment 2 (N = 561), we replicated and generalized the effect of exceptionality on attributions of free will to other scenarios, with support for the classic exceptionality effect regarding regret and an extension to moral responsibility. In Experiment 3 (N = 128), we replicated these effects in a within-subject design. When using a classic experimental philosophy paradigm contrasting a deterministic and an indeterministic universe, we found that the results were robust across both contexts. We conclude that there is a consistent support for a link between exceptionality and free will attributions.

From the Conclusion:

Although based on different theoretical frameworks, our results on attributions of free will could be related to the findings of Bear and Knobe (2016). They found that whether a behavior is performed “actively” rather than “passively” modifies people’s judgment about the compatibility of that behavior with the thesis of causal determinism. More concretely, people consider a behavior performed actively (such as composing a highly technical legal document) to be less possible (i.e., less compatible) in a causally deterministic universe than a behavior performed passively (such as impulsively shoplifting from a convenience store). According to Bear and Knobe (2016), people rely on two cues to determine whether a behavior is active or passive: mental effort and spontaneity. By adopting this framework, we may assimilate an exceptional behavior to an active behavior (because it is “breaking off from the flow of things” and requires mental effort and spontaneity) and a routine behavior to a passive behavior (because it is “going with the flow” and requires neither mental effort nor spontaneity). In the same vein, an agent acting spontaneously is considered freer than an agent acting deliberately (Vierkant et al., 2019). Although Vierkant et al. (2019) manipulated the agent’s choice (spontaneous vs. deliberate) in a within-subjects design, their study may suggest that when deliberation (or mental effort) and spontaneity are experimentally contrasted, spontaneity is the driving force behind the increase in the agent’s perceived free will.


Wednesday, August 28, 2019

Asymmetrical genetic attributions for prosocial versus antisocial behaviour

Matthew S. Lebowitz, Kathryn Tabb, & Paul S. Appelbaum
Nature Human Behaviour (2019)

Abstract

Genetic explanations of human behaviour are increasingly common. While genetic attributions for behaviour are often considered relevant for assessing blameworthiness, it has not yet been established whether judgements about blameworthiness can themselves impact genetic attributions. Across six studies, participants read about individuals engaging in prosocial or antisocial behaviour, and rated the extent to which they believed that genetics played a role in causing the behaviour. Antisocial behaviour was consistently rated as less genetically influenced than prosocial behaviour. This was true regardless of whether genetic explanations were explicitly provided or refuted. Mediation analyses suggested that this asymmetry may stem from people’s motivating desire to hold wrongdoers responsible for their actions. These findings suggest that those who seek to study or make use of genetic explanations’ influence on evaluations of, for example, antisocial behaviour should consider whether such explanations are accepted in the first place, given the possibility of motivated causal reasoning.


Saturday, August 3, 2019

When Do Robots Have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will

Eddy Nahmias, Corey Allen, & Bradley Loveall
Georgia State University

From the Conclusion:

If future research bolsters our initial findings, then it would appear that when people consider whether agents are free and responsible, they are considering whether the agents have capacities to feel emotions more than whether they have conscious sensations or even capacities to deliberate or reason. It’s difficult to know whether people assume that phenomenal consciousness is required for or enhances capacities to deliberate and reason. And of course, we do not deny that cognitive capacities for self-reflection, imagination, and reasoning are crucial for free and responsible agency (see, e.g., Nahmias 2018). For instance, when considering agents that are assumed to have phenomenal consciousness, such as humans, it is likely that people’s attributions of free will and responsibility decrease in response to information that an agent has severely diminished reasoning capacities. But people seem to have intuitions that support the idea that an essential condition for free will is the capacity to experience conscious emotions. And we find it plausible that these intuitions indicate that people take it to be essential to being a free agent that one can feel the emotions involved in reactive attitudes and in genuinely caring about one’s choices and their outcomes.

(cut)

Perhaps, fiction points us towards the truth here. In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., and Westworld), viewers tend to think of the robots differently when they are portrayed in a way that suggests they express and feel emotions.  No matter how intelligent or complex their behavior, the robots do not come across as free and autonomous until they seem to care about what happens to them (and perhaps others). Often this is portrayed by their showing fear of their own or others’ deaths, or expressing love, anger, or joy. Sometimes it is portrayed by the robots’ expressing reactive attitudes, such as indignation about how humans treat them, or our feeling such attitudes towards them, for instance when they harm humans.


Tuesday, October 11, 2016

When fairness matters less than we expect

Gus Cooney, Daniel T. Gilbert, and Timothy D. Wilson
PNAS 2016; published ahead of print September 16, 2016

Abstract

Do those who allocate resources know how much fairness will matter to those who receive them? Across seven studies, allocators used either a fair or unfair procedure to determine which of two receivers would receive the most money. Allocators consistently overestimated the impact that the fairness of the allocation procedure would have on the happiness of receivers (studies 1–3). This happened because the differential fairness of allocation procedures is more salient before an allocation is made than it is afterward (studies 4 and 5). Contrary to allocators’ predictions, the average receiver was happier when allocated more money by an unfair procedure than when allocated less money by a fair procedure (studies 6 and 7). These studies suggest that when allocators are unable to overcome their own preallocation perspectives and adopt the receivers’ postallocation perspectives, they may allocate resources in ways that do not maximize the net happiness of receivers.

Significance

Human beings care a great deal about the fairness of the procedures that are used to allocate resources, such as wealth, opportunity, and power. But in a series of experiments, we show that those to whom resources are allocated often care less about fairness than those who allocate the resources expect them to. This “allocator’s illusion” results from the fact that fairness seems more important before an allocation is made (when allocators are choosing a procedure) than afterward (when receivers are reacting to the procedure that allocators chose). This illusion has important consequences for policy-makers, managers, health care providers, judges, teachers, parents, and others who are charged with choosing the procedures by which things of value will be allocated.


Friday, February 13, 2015

Me, My “Self” and You: Neuropsychological Relations between Social Emotion, Self-Awareness, and Morality

By Mary Helen Immordino-Yang
Emotion Review, July 2011, 3(3), 313–315

Abstract

Social emotions about others’ mind states, for example, compassion for psychological pain or admiration for virtue, are an important foundation for morality because they help us decide how to treat other people. Although these emotions are ostensibly concerned with the mental qualities and situations of others, they can precipitate intimately subjective reflections on the quality of one’s own social life and mind, and via these reflections incite a desire to engage in meaningful moral actions. Our interview and neural data suggest that the shift from social emotion to introspection may be facilitated by conscious mental evaluation of emotion-related visceral sensations.
