Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, May 22, 2020

Is identity illusory?

Andreas L. Mogensen
European Journal of Philosophy
First published 29 April 2020

Abstract

Certain of our traits are thought more central to who we are: they comprise our individual identity. What makes these traits privileged in this way? What accounts for their identity centrality? Although considerations of identity play a key role in many different areas of moral philosophy, I argue that we currently have no satisfactory account of the basis of identity centrality. Nor should we expect one. Rather, we should adopt an error theory: we should concede that there is nothing in reality corresponding to the perceived distinction between the central and peripheral traits of a person.

Here is an excerpt:

Considerations of identity play a key role in many different areas of contemporary moral philosophy. The following is not intended as an exhaustive survey. I will focus on just four key issues: the ethics of biomedical enhancement; blame and responsibility; constructivist theories in meta‐ethics; and the value of moral testimony.

The wide‐ranging moral importance of individual identity plausibly reflects its intimate connection to the ethics of authenticity (Taylor, 1991). To a first approximation, authenticity is achieved when the way a person lives is expressive of her most centrally defining traits. Inauthenticity occurs when she fails to give expression to these traits. The key anxiety attached to the ideal of authenticity is that the conditions of modern life conspire to mask the true self beneath the demands of social conformity and the enticements of mass culture (Riesman, Glazer, & Denney, 1961/2001; Rousseau, 1782/2011). In spite of this perceived incongruity, authenticity is considered one of the constitutive ideals of modernity (Guignon, 2004; Taylor, 1989, 1991).

Considerations of authenticity have played a key role in recent debates on human enhancement (Juth, 2011). The specific type of enhancement at issue here is cosmetic psychopharmacology: the use of psychiatric drugs to bring about changes in mood and personality, allowing already healthy individuals to lead happier and more successful lives by becoming less shy, more confident, etc. (Kramer, 1993). Many find cosmetic psychopharmacology disturbing. In an influential paper, Elliott (1998) suggests that what disturbs us is the apparent inauthenticity involved in this kind of personal transformation: the pursuit of a new, enhanced personality represents a flight from the real you. Defenders of enhancement charge that Elliott's concern rests on a mistaken conception of identity. DeGrazia (2000, 2005) argues that Elliott fails to appreciate the extent to which a person's identity is determined by her own reflexive attitudes. Because of the authoritative role assigned to a person's self‐conception, DeGrazia concludes that if a person wholeheartedly desires to change some aspect of herself, she cannot meaningfully be accused of inauthenticity.

The paper is here.

Wednesday, April 29, 2020

Characteristics of Faculty Accused of Academic Sexual Misconduct in the Biomedical and Health Sciences

Espinoza M, Hsiehchen D.
JAMA. 2020;323(15):1503–1505.
doi:10.1001/jama.2020.1810

Abstract

Despite protections mandated in educational environments, unwanted sexual behaviors have been reported in medical training. Policies to combat such behaviors need to be based on a better understanding of the perpetrators. We characterized faculty accused of sexual misconduct resulting in institutional or legal actions that proved or supported guilt at US higher education institutions in the biomedical and health sciences.

Discussion

Of biomedical and health sciences faculty accused of sexual misconduct resulting in institutional or legal action, a majority were full professors, chairs or directors, or deans. Sexual misconduct was rarely an isolated event. Accused faculty frequently resigned or remained in academics, and few were sanctioned by governing boards.

Limitations include that only data on accused faculty who received media attention or were involved in legal proceedings were captured. In addition, the duration of behaviors, the exact number of targets, and the outcome data could not be identified for all accused faculty. Thus, this study cannot determine the prevalence of faculty who commit sexual misconduct, and the characteristics may not be generalizable across institutions.

The lack of transparency in investigations suggests that misconduct behaviors may not have been wholly captured by the public documents. Efforts to eliminate nondisclosure agreements are needed to enhance transparency. Further work is needed on mechanisms to prevent sexual misconduct at teaching institutions.

The info is here.

Monday, April 27, 2020

Drivers are blamed more than their automated cars when both make mistakes

Awad, E., Levine, S., Kleiman-Weiner, M. et al.
Nat Hum Behav 4, 134–143 (2020).
https://doi.org/10.1038/s41562-019-0762-8

Abstract

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human–machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

From the Discussion:

Our central finding (diminished blame apportioned to the machine in dual-error cases) leads us to believe that, while there may be many psychological barriers to self-driving car adoption19, public over-reaction to dual-error cases is not likely to be one of them. In fact, we should perhaps be concerned about public underreaction. Because the public are less likely to see the machine as being at fault in dual-error cases like the Tesla and Uber crashes, the sort of public pressure that drives regulation might be lacking. For instance, if we were to allow the standards for automated vehicles to be set through jury-based court-room decisions, we expect that juries will be biased to absolve the car manufacturer of blame in dual-error cases, thereby failing to put sufficient pressure on manufacturers to improve car designs.

The article is here.

Monday, April 20, 2020

Europe plans to strictly regulate high-risk AI technology

Nicholas Wallace
sciencemag.org
Originally published 19 Feb 20

Here is an excerpt:

The commission wants binding rules for “high-risk” uses of AI in sectors like health care, transport, or criminal justice. The criteria to determine risk would include considerations such as whether someone could get hurt—by a self-driving car or a medical device, for example—or whether a person has little say in whether they’re affected by a machine’s decision, such as when AI is used in job recruitment or policing.

For high-risk scenarios, the commission wants to stop inscrutable “black box” AIs by requiring human oversight. The rules would also govern the large data sets used in training AI systems, ensuring that they are legally procured, traceable to their source, and sufficiently broad to train the system. “An AI system needs to be technically robust and accurate in order to be trustworthy,” the commission’s digital czar Margrethe Vestager said at the press conference.

The law will also establish who is responsible for an AI system’s actions—such as the company using it, or the company that designed it. High-risk applications would have to be shown to be compliant with the rules before being deployed in the European Union.

The commission also plans to offer a “trustworthy AI” certification, to encourage voluntary compliance in low-risk uses. Certified systems later found to have breached the rules could face fines.

The info is here.

Friday, April 17, 2020

Toward equipping Artificial Moral Agents with multiple ethical theories

George Rautenbach and C. Maria Keet
arXiv:2003.00935v1 [cs.CY] 2 Mar 2020

Abstract

Artificial Moral Agents (AMAs) are the focus of a field in computer science that aims to create autonomous machines able to make moral decisions akin to how humans do. Researchers have proposed theoretical means of creating such machines, while philosophers have made arguments as to how these machines ought to behave, or whether they should even exist.

Of the currently theorised AMAs, all research and design has been done with either no normative ethical theory as a basis, or at most one specified theory. This is problematic because it narrows down the AMA's functional ability and versatility, which in turn causes moral outcomes that only a limited number of people agree with (thereby undermining an AMA's ability to be moral in a human sense). As a solution, we design a three-layer model for general normative ethical theories that can be used to serialise the ethical views of people and businesses for an AMA to use during reasoning. Four specific ethical theories (Kantianism, divine command theory, utilitarianism, and egoism) were modelled and evaluated as proof of concept for normative modelling. Furthermore, all models were serialised to XML/XSD as proof of support for computerisation.
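
To make the serialisation idea concrete, here is a toy Python sketch of what writing one ethical theory to XML might look like. The element names and the simple theory/principle/rules structure are hypothetical illustrations, not the schema Rautenbach and Keet define.

import xml.etree.ElementTree as ET

def serialise_theory(name, principle, rules):
    """Serialise a toy theory -> principle -> rules hierarchy to XML."""
    # Hypothetical element names; not the XSD from the paper.
    theory = ET.Element("EthicalTheory", attrib={"name": name})
    ET.SubElement(theory, "CorePrinciple").text = principle
    rules_el = ET.SubElement(theory, "Rules")
    for rule in rules:
        ET.SubElement(rules_el, "Rule").text = rule
    return ET.tostring(theory, encoding="unicode")

# Example: a crude utilitarian entry an AMA could load during reasoning.
print(serialise_theory(
    "Utilitarianism",
    "Choose the action that maximises aggregate well-being.",
    ["Estimate the welfare impact of each available action.",
     "Select the action with the highest expected total welfare."],
))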

From the Discussion:

A big philosophical grey area in AMAs concerns agency: that is, an entity's ability to understand available actions and their moral values and to freely choose between them. Whether or not machines can truly understand their decisions and whether they can be held accountable for them is a matter of philosophical discourse. Whatever the answer may be, AMA agency poses a difficult question that must be addressed.

The question is as follows: should the machine act as an agent itself, or should it act as an informant for another agent? If an AMA reasons for another agent (e.g., a person), then reasoning will be done with that person as the actor and the one who holds responsibility. This has the disadvantage of putting that person's interests before those of other morally considerable entities, especially with regard to ethical theories like egoism. Making the machine the moral agent has the advantage of objectivity where multiple people are concerned, but makes it harder to assign blame for its actions: a machine does not care about imprisonment or even disassembly. A Luddite would say it has no incentive to do good to humanity. Of course, a deterministic machine does not need incentive at all, since it will always behave according to the theory it is running. This lack of fear or "personal interest" can be good, because it ensures objective reasoning and fair consideration of affected parties.

The paper is here.

Wednesday, February 12, 2020

Empirical Work in Moral Psychology

Joshua May
Routledge Encyclopedia of Philosophy
Taylor and Francis
Originally published in 2017

Abstract

How do we form our moral judgments, and how do they influence behaviour? What ultimately motivates kind versus malicious action? Moral psychology is the interdisciplinary study of such questions about the mental lives of moral agents, including moral thought, feeling, reasoning and motivation. While these questions can be studied solely from the armchair or using only empirical tools, researchers in various disciplines, from biology to neuroscience to philosophy, can address them in tandem. Some key topics in this respect revolve around moral cognition and motivation, such as moral responsibility, altruism, the structure of moral motivation, weakness of will, and moral intuitions. Of course there are other important topics as well, including emotions, character, moral development, self-deception, addiction, well-being, and the evolution of moral capacities.

Some of the primary objects of study in moral psychology are the processes driving moral action. For example, we think of ourselves as possessing free will; as being responsible for what we do; as capable of self-control; and as capable of genuine concern for the welfare of others. Such claims can be tested by empirical methods to some extent in at least two ways. First, we can determine what in fact our ordinary thinking is. While many philosophers investigate this through rigorous reflection on concepts, we can also use the empirical methods of the social sciences. Second, we can investigate empirically whether our ordinary thinking is correct or illusory. For example, we can check the empirical adequacy of philosophical theories, assessing directly any claims made about how we think, feel, and behave.

Understanding the psychology of moral individuals is certainly interesting in its own right, but it also often has direct implications for other areas of ethics, such as metaethics and normative ethics. For instance, determining the role of reason versus sentiment in moral judgment and motivation can shed light on whether moral judgments are cognitive, and perhaps whether morality itself is in some sense objective. Similarly, evaluating moral theories, such as deontology and utilitarianism, often relies on intuitive judgments about what one ought to do in various hypothetical cases. Empirical research can again serve as an additional tool to determine what exactly our intuitions are and which psychological processes generate them, contributing to a rigorous evaluation of the warrant of moral intuitions.

The info is here.

Monday, January 27, 2020

The Character of Causation: Investigating the Impact of Character, Knowledge, and Desire on Causal Attributions

Justin Sytsma
(2019) Preprint

Abstract

There is a growing consensus that norms matter for ordinary causal attributions. This has important implications for philosophical debates over actual causation. Many hold that theories of actual causation should coincide with ordinary causal attributions, yet those attributions often diverge from the theories when norms are involved. There remains substantive debate about why norms matter for causal attributions, however. In this paper, I consider two competing explanations—Alicke’s bias view, which holds that the impact of norms reflects systematic error (suggesting that ordinary causal attributions should be ignored in the philosophical debates), and our responsibility view, which holds that the impact of norms reflects the appropriate application of the ordinary concept of causation (suggesting that philosophical accounts are not analyzing the ordinary concept). I investigate one key difference between these views: the bias view, but not the responsibility view, predicts that “peripheral features” of the agents in causal scenarios—features that are irrelevant to appropriately assessing responsibility for an outcome, such as general character—will also impact ordinary causal attributions. These competing predictions are tested for two different types of scenarios. I find that information about an agent’s character does not impact causal attributions on its own. Rather, when character shows an effect it works through inferences to relevant features of the agent. In one scenario this involves inferences to the agent’s knowledge of the likely result of her action and her desire to bring about that result, with information about knowledge and desire each showing an independent effect on causal attributions.

From the Conclusion:

Alicke’s bias view holds that not only do features of the agent’s mental states matter, such as her knowledge and desires concerning the norm and the outcome, but also peripheral features of the agent whose impact could only reasonably be explained in terms of bias. In contrast, our responsibility view holds that the impact of norms does not reflect bias, but rather that ordinary causal attributions issue from the appropriate application of a concept with a normative component. As such, we predict that while judgments about the agent’s mental states that are relevant to adjudicating responsibility will matter, peripheral features of the agent will only matter insofar as they warrant an inference to other features of the agent that are relevant.

 In line with the responsibility view and against the bias view, the results of the studies presented in this paper suggest that information relevant to assessing an agent’s character matters but only when it warrants an inference to a non-peripheral feature, such as the agent’s negligence in the situation or her knowledge and desire with regard to the outcome. Further, the results indicate that information about an agent’s knowledge and desire both impact ordinary causal attributions in the scenario tested. This raises an important methodological issue for empirical work on ordinary causal attributions: researchers need to carefully consider and control for the inferences that participants might draw concerning the agents’ mental states and motivations.
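
As an illustration of what "independent effects" of knowledge and desire means statistically, here is a minimal Python sketch using simulated vignette data; the design, effect sizes, and ratings are invented for demonstration and are not Sytsma's materials or results.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
knowledge = rng.integers(0, 2, n)  # 0 = agent was ignorant, 1 = agent knew
desire = rng.integers(0, 2, n)     # 0 = no desire for outcome, 1 = desired it
# Simulated 1-7 causal attribution ratings with additive effects plus noise.
rating = 3.0 + 1.2 * knowledge + 0.8 * desire + rng.normal(0, 1.0, n)

X = sm.add_constant(np.column_stack([knowledge, desire]))
fit = sm.OLS(rating, X).fit()
print(fit.summary(xname=["const", "knowledge", "desire"]))
# Significant coefficients for both predictors would indicate independent
# contributions of knowledge and desire to causal attributions.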

The research is here.

Saturday, December 7, 2019

Why do so many Americans hate the welfare state?

[Photo caption: Elizabeth Anderson in her office at the University of Michigan: "There is a profound suspicion of anyone who is poor, and a consequent raising to the highest priority imposing incredibly humiliating, harsh conditions on access to welfare benefits." Photograph: © John D and Catherine T MacArthur Foundation, used with permission]
Joe Humphries
irishtimes.com
Originally posted October 24, 2019

Interview with Elizabeth Anderson

Here is an excerpt:

Many ethical problems today are presented as matters of individual rather than collective responsibility. Instead of looking at structural injustices, for example, people are told to recycle more to save the environment, or to manage their workload better to avoid exploitation. Where does this bias come from?

“One way to think about it is this is another bizarre legacy of Calvinist thought. It’s really deep in Protestantism that each individual is responsible for their own salvation.

“It’s really an anti-Catholic thing, right? The Catholics have this giant institution that’s going to help people; and Protestantism says, no, no, no, it’s totally you and your conscience, or your faith.

“That individualism – the idea that I’ve got to save myself – got secularised over time. And it is deep, much deeper in America than in Europe – not only because there are way more Catholics in Europe who never bought into this ideology – but also in Europe due to the experience of the two World Wars they realised they are all in the boat together and they better work together or else all is lost.

“America was never under existential threat. So you didn’t have that same sense of the absolute necessity for individual survival that we come together as a nation. I think those experiences are really profound and helped to propel the welfare state across Europe post World War II.”

You’re well known for promoting the idea of relational equality. Tell us a bit about it.

“For a few decades now I’ve been advancing the idea that the fundamental aim of egalitarianism is to establish relations of equality: What are the social relations with the people around us? And that aims to take our focus away from just how much money is in my pocket.

“People do not exist for the sake of money. Wealth exists to enhance your life and not the other way around. We should be focusing on what are we doing to each other in our obsession with maximising profits. How are workers being treated? How are consumers being treated? How is the environment being treated?”

The info is here.

Tuesday, November 19, 2019

Moral Responsibility

Talbert, Matthew
The Stanford Encyclopedia of Philosophy 
(Winter 2019 Edition), Edward N. Zalta (ed.)

Making judgments about whether a person is morally responsible for her behavior, and holding others and ourselves responsible for actions and the consequences of actions, is a fundamental and familiar part of our moral practices and our interpersonal relationships.

The judgment that a person is morally responsible for her behavior involves—at least to a first approximation—attributing certain powers and capacities to that person, and viewing her behavior as arising (in the right way) from the fact that the person has, and has exercised, these powers and capacities. Whatever the correct account of the powers and capacities at issue (and canvassing different accounts is the task of this entry), their possession qualifies an agent as morally responsible in a general sense: that is, as one who may be morally responsible for particular exercises of agency. Normal adult human beings may possess the powers and capacities in question, and non-human animals, very young children, and those suffering from severe developmental disabilities or dementia (to give a few examples) are generally taken to lack them.

To hold someone responsible involves—again, to a first approximation—responding to that person in ways that are made appropriate by the judgment that she is morally responsible. These responses often constitute instances of moral praise or moral blame (though there may be reason to allow for morally responsible behavior that is neither praiseworthy nor blameworthy: see McKenna 2012: 16–17 and M. Zimmerman 1988: 61–62). Blame is a response that may follow on the judgment that a person is morally responsible for behavior that is wrong or bad, and praise is a response that may follow on the judgment that a person is morally responsible for behavior that is right or good.

The information is here.

Monday, October 21, 2019

An ethicist weighs in on our moral failure to act on climate change

Monique Deveaux
The Conversation
Originally published September 26, 2019

Here is an excerpt:

This call to collective moral and political responsibility is exactly right. As individuals, we can all be held accountable for helping to stop the undeniable environmental harms around us and the catastrophic threat posed by rising levels of CO2 and other greenhouse gases. Those of us with a degree of privilege and influence have an even greater responsibility to assist and advocate on behalf of those most vulnerable to the effects of global warming.

This group includes children everywhere whose futures are uncertain at best, terrifying at worst. It also includes those who are already suffering from severe weather events and rising water levels caused by global warming, and communities dispossessed by fossil fuel extraction. Indigenous peoples around the globe whose lands and water systems are being confiscated and polluted in the search for ever more sources of oil, gas and coal are owed our support and assistance. So are marginalized communities displaced by mountaintop removal and destructive dam energy projects, climate refugees and many others.

The message of climate activists is that we can't fulfill our responsibilities simply by making green choices as consumers or expressing support for their cause. The late American political philosopher Iris Young thought that we could only discharge our "political responsibility for injustice," as she put it, through collective political action.

The interests of the powerful, she warned, conflict with the political responsibility to take actions that challenge the status quo—but which are necessary to reverse injustices.

As the striking school children and older climate activists everywhere have repeatedly pointed out, political leaders have so far failed to enact the carbon emissions reduction policies that are so desperately needed. Despite UN Secretary General António Guterres' sombre words of warning at the Climate Action Summit, the UN is largely powerless in the face of governments that refuse to enact meaningful carbon-reducing policies, such as China and the U.S.

The info is here.

Tuesday, October 8, 2019

Greta Thunberg To U.S.: 'You Have A Moral Responsibility' On Climate Change

Bill Chappell and Ailsa Chang
NPR.org
Originally published September 13, 2019

Greta Thunberg led a protest at the White House on Friday. But she wasn't looking to go inside — "I don't want to meet with people who don't accept the science," she says.

The young Swedish activist joined a large crowd of protesters who had gathered outside, calling for immediate action to help the environment and reverse an alarming warming trend in average global temperatures.

She says her message for President Trump is the same thing she tells other politicians: Listen to science, and take responsibility.

Thunberg, 16, arrived in the U.S. last week after sailing across the Atlantic to avoid the carbon emissions from jet travel. She plans to spend nearly a week in Washington, D.C. — but she doesn't plan to meet with anyone from the Trump administration during that time.

"I haven't been invited to do that yet. And honestly I don't want to do that," Thunberg tells NPR's Ailsa Chang. If people in the White House who reject climate change want to change their minds, she says, they should rely on scientists and professionals to do that.

But Thunberg also believes the U.S. has an "incredibly important" role to play in fighting climate change.

"You are such a big country," she says. "In Sweden, when we demand politicians to do something, they say, 'It doesn't matter what we do — because just look at the U.S.'

The info is here.

Monday, July 22, 2019

Thomas Fisher on The Ethics of Architecture and Other Contradictions

Michael Crosbie
www.archdaily.com
Originally posted June 21, 2019

Here is an excerpt from the interview between Michael Crosbie and Thomas Fisher:

MJC: Most architects don’t give serious consideration to ethics in their design work. Why not?

TF: The revision of AIA’s Code of Ethics requiring members to discuss the environmental impacts of a project with the client really gets at that. In the past, architects have been wary to have such discussions because it questions the power of the client to do whatever they want because they have the means to do so. Architects have been designing for people with power and money for a very long time. It’s easier to talk about aesthetics, function, or the pragmatics of design because it doesn’t question a client’s power.

MJC: “The pursuit of happiness” is a very strong idea in American culture. How do architects balance serving clients—in their “pursuit of happiness” through architecture—with the greater good of the community?

TF: In ethics, “the pursuit of happiness” is often misunderstood. Utilitarian ethics states that you strive to make the greatest number of people happy; the 18th-century philosopher Jeremy Bentham promoted “the greatest good for the greatest number.” But ethics is also about understanding how others view the world, and how our actions affect the lives and welfare of others. The role of professionals is to look after the greater good. Licensure is a social contract in which, in exchange for a monopoly in providing professional services, the professional is responsible for the larger picture. Designing to satisfy someone’s hedonistic “pursuit of happiness” without regard to that bigger picture is unethical behavior for an architect. It violates the social contract behind licensure. I think an architect should lose his or her license for an action like that. Such an action might not be illegal, but it’s unethical. Ethics is really about our day-to-day interactions with people in the realm of space, public and private.

The interview is here.

Monday, June 24, 2019

Not so Motivated After All? Three Replication Attempts and a Theoretical Challenge to a Morally-Motivated Belief in Free Will

Andrew E. Monroe and Dominic Ysidron
Preprint

Abstract

Free will is often appraised as a necessary input for holding others morally or legally responsible for misdeeds. Recently, however, Clark and colleagues (2014) argued for the opposite causal relationship. They assert that moral judgments and the desire to punish motivate people's belief in free will. In three experiments, two exact replications (Studies 1 & 2b) and one close replication (Study 2a), we seek to replicate these findings. Additionally, in a novel experiment (Study 3) we test a theoretical challenge derived from attribution theory, which suggests that immoral behaviors do not uniquely influence free will judgments. Instead, our norm-violation model argues that norm deviations of any kind (good, bad, or strange) cause people to attribute more free will to agents, and attributions of free will are explained via desire inferences. Across replication experiments we found no evidence for the original claim that witnessing immoral behavior causes people to increase their belief in free will, though we did replicate the finding that people attribute more free will to agents who behave immorally compared to a neutral control (Studies 2a & 3). Finally, our novel experiment demonstrated broad support for our norm-violation account, suggesting that people's willingness to attribute free will to others is malleable, but not because people are motivated to blame. Instead, this experiment shows that attributions of free will are best explained by people's expectations for norm adherence; when these expectations are violated, people infer that an agent expressed their free will to do so.

From the Discussion Section:

Together these findings argue for a non-moral explanation of free will judgments, with norm violation as the key driver. This account explains people's tendency to attribute more free will to agents behaving badly: people generally expect others to follow moral norms, and when they don't, people believe that there must have been a strong desire to perform the behavior. In addition, a norm-violation account is able to explain why people attribute more free will to agents behaving in odd or morally positive ways. Any deviation from what is expected causes people to attribute more desire and choice (i.e., free will) to that agent. Thus our findings suggest that people's willingness to ascribe free will to others is indeed malleable, but considerations of free will are driven by basic social cognitive representations of norms, expectations, and desire. Moreover, these data indicate that when people endorse free will for themselves or for others, they are not making claims about broad metaphysical freedom. Instead, if desires and norm constraints are what affect ascriptions of free will, this suggests that what it means to have (or believe in) free will is to be rational (i.e., making choices informed by desires and preferences) and able to overcome constraints.

A preprint can be found here.

Motivated free will belief: The theory, new (preregistered) studies, and three meta-analyses

Clark, C. J., Winegard, B. M., & Shariff, A. F. (2019).
Manuscript submitted for publication.

Abstract

Do desires to punish lead people to attribute more free will to individual actors (motivated free will attributions) and to stronger beliefs in human free will (motivated free will beliefs) as suggested by prior research? Results of 14 new (7 preregistered) studies (n=4,014) demonstrated consistent support for both of these. These findings consistently replicated in studies (k=8) in which behaviors meant to elicit desires to punish were rated as equally or less counternormative than behaviors in control conditions. Thus, greater perceived counternormativity cannot account for these effects. Additionally, three meta-analyses of the existing data (including eight vignette types and eight free will judgment types) found support for motivated free will attributions (k=22; n=7,619; r=.25, p<.001) and beliefs (k=27; n=8,100; r=.13, p<.001), which remained robust after removing all potential moral responsibility confounds (k=26; n=7,953; r=.12, p<.001). The size of these effects varied by vignette type and free will belief measurement. For example, presenting the FAD+ free will belief subscale mixed among three other subscales (as in Monroe and Ysidron’s [2019] failed replications) produced a smaller average effect size (r=.04) than shorter and more immediate measures (rs=.09-.28). Also, studies with neutral control conditions produced larger effects (Attributions: r=.30; Beliefs: rs=.14-.16) than those with control conditions involving bad actions (Attributions: r=.05; Beliefs: rs=.04-.06). Removing these two kinds of studies from the meta-analyses produced larger average effect sizes (Attributions: r=.28; Beliefs: rs=.17-.18). We discuss the relevance of these findings for past and future research and the significance of these findings for human responsibility.
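
For readers unfamiliar with how average effect sizes like these are pooled, here is a minimal Python sketch of the standard fixed-effect approach (Fisher z-transform each r, weight by inverse variance, back-transform); the study correlations and sample sizes are made up for illustration and are not the authors' data.

import numpy as np

def meta_mean_r(rs, ns):
    """Inverse-variance-weighted mean correlation (fixed-effect model)."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    zs = np.arctanh(rs)  # Fisher z-transform of each study's r
    ws = ns - 3.0        # weight = 1 / var(z), and var(z) = 1 / (n - 3)
    z_bar = np.sum(ws * zs) / np.sum(ws)
    return np.tanh(z_bar)  # back-transform the pooled z to the r metric

# Hypothetical studies: four correlations with their sample sizes.
print(round(meta_mean_r([0.30, 0.22, 0.18, 0.27], [150, 220, 310, 95]), 3))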

From the Discussion Section:

We suspect that motivated free will beliefs have become more common as society has become more humane and more concerned about proportionate punishment. Many people now assiduously reflect upon their own society’s punitive practices and separate those who deserve to be punished from those who are incapable of being fully responsible for their actions. Free will is crucial here because it is often considered a prerequisite for moral responsibility (Nichols & Knobe, 2007; Sarkissian et al., 2010; Shariff et al., 2014). Therefore, when one is motivated to punish another person, one is also motivated to inflate free will beliefs and free will attributions to specific perpetrators as a way to justify punishing the person.

A preprint can be downloaded here.

Tuesday, June 18, 2019

A tech challenge? Fear not, many AI issues boil down to ethics

Peter Montagnon
www.ft.com
Originally posted June 3, 2019

Here is an excerpt:

Ethics are particularly important when technology enters the governance agenda. Machines may be capable of complex calculation but they are so far unable to make qualitative or moral judgments.

Also, the use and manipulation of a massive amount of data creates an information asymmetry. This confers power on those who control it at the potential expense of those who are the subject of it.

Ultimately there must always be human accountability for the decisions that machines originate.

In the corporate world, the board is where accountability resides. No one can escape this. To exercise their responsibilities, directors do not need to be as expert as tech teams. For sure, they need to be familiar with the scope of technology used by their companies, what it can and cannot do, and where the risks and opportunities lie.

For that they may need trustworthy advice from either the chief technology officer or external experts, but the decisions will generally be about what is acceptable and what is not.

The risks may well be of a human rather than a tech kind. With the motor industry, one risk with semi-automated vehicles is that the owners of such cars will think they can do more on autopilot than they can. It seems most of us are bad at reading instructions and will need clear warnings, perhaps to the point where the car may even seem disappointing.

The info is here.


Saturday, June 1, 2019

Does It Matter Whether You or Your Brain Did It?

Uri Maoz, K. R. Sita, J. J. A. van Boxtel, and L. Mudrik
Front. Psychol., 30 April 2019
https://doi.org/10.3389/fpsyg.2019.00950

Abstract

Despite progress in cognitive neuroscience, we are still far from understanding the relations between the brain and the conscious self. We previously suggested that some neuroscientific texts that attempt to clarify these relations may in fact make them more difficult to understand. Such texts—ranging from popular science to high-impact scientific publications—position the brain and the conscious self as two independent, interacting subjects, capable of possessing opposite psychological states. We termed such writing 'Double Subject Fallacy' (DSF). We further suggested that such DSF language, besides being conceptually confusing and reflecting dualistic intuitions, might affect people's conceptions of moral responsibility, lessening the perception of guilt over actions. Here, we empirically investigated this proposition with a series of three experiments (a pilot and two preregistered replications). Subjects were presented with moral scenarios where the defendant was either (1) clearly guilty, (2) ambiguous, or (3) clearly innocent, while the accompanying neuroscientific evidence about the defendant was presented using DSF or non-DSF language. Subjects were instructed to rate the defendant's guilt in all experiments. Subjects rated the defendant in the clearly guilty scenario as guiltier than in the two other scenarios, and the defendant in the ambiguously described scenario as guiltier than in the innocent scenario, as expected. In Experiment 1 (N = 609), an effect was further found for DSF language in the expected direction: subjects rated the defendant less guilty when the neuroscientific evidence was described using DSF language, across all levels of culpability. However, this effect did not replicate in Experiment 2 (N = 1794), which focused on a different moral scenario, nor in Experiment 3 (N = 1810), which was an exact replication of Experiment 1. Bayesian analyses yielded strong evidence against the existence of an effect of DSF language on the perception of guilt. Our results thus challenge the claim that DSF language affects subjects' moral judgments. They further demonstrate the importance of good scientific practice, including preregistration and—most critically—replication, to avoid reaching erroneous conclusions based on false-positive results.
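
The "strong evidence against the existence of an effect" is the language of Bayes factors. Below is a minimal Python sketch of the widely used JZS (Cauchy-prior) Bayes factor for a two-sample t-test, in the style of Rouder and colleagues; it illustrates the general method only, and the t and sample-size values are invented rather than taken from the paper.

import numpy as np
from scipy import stats
from scipy.integrate import quad

def jzs_bf10(t, nx, ny, r=0.707):
    """JZS Bayes factor BF10 for a two-sample t-test (Cauchy prior, scale r)."""
    n_eff = nx * ny / (nx + ny)  # effective sample size
    v = nx + ny - 2              # degrees of freedom
    g_prior = stats.invgamma(a=0.5, scale=r**2 / 2)  # prior on g

    def integrand(g):
        return ((1 + n_eff * g) ** -0.5
                * (1 + t**2 / ((1 + n_eff * g) * v)) ** (-(v + 1) / 2)
                * g_prior.pdf(g))

    marginal_h1, _ = quad(integrand, 0, np.inf)     # evidence under H1
    marginal_h0 = (1 + t**2 / v) ** (-(v + 1) / 2)  # evidence under H0
    return marginal_h1 / marginal_h0

# Invented example: a small t with large samples favours the null.
bf10 = jzs_bf10(t=0.5, nx=900, ny=900)
print(f"BF10 = {bf10:.3f}, BF01 = {1 / bf10:.1f}")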

Friday, May 24, 2019

Holding Robots Responsible: The Elements of Machine Morality

Y. Bigman, A. Waytz, R. Alterovitz, and K. Gray
Trends in Cognitive Sciences

Abstract


As robots become more autonomous, people will see them as more responsible for wrongdoing. Moral psychology suggests that judgments of robot responsibility will hinge on perceived situational awareness, intentionality, and free will—plus anthropomorphism and the robot’s capacity for harm. We also consider questions of robot rights and moral decision-making.

Here is an excerpt:

Philosophy, law, and modern cognitive science all reveal that judgments of human moral responsibility hinge on autonomy. This explains why children, who seem to have less autonomy than adults, are held less responsible for wrongdoing. Autonomy is also likely crucial in judgments of robot moral responsibility. The reason people ponder and debate the ethical implications of drones and self-driving cars (but not tractors or blenders) is because these machines can act autonomously.

Admittedly, today's robots have limited autonomy, but it is an expressed goal of roboticists to develop fully autonomous robots—machine systems that can act without human input. As robots become more autonomous, their potential for moral responsibility will only grow. Even as roboticists create robots with more "objective" autonomy, we note that "subjective" autonomy may be more important: work in cognitive science suggests that autonomy and moral responsibility are more matters of perception than objective truths.

The info can be downloaded here.

Friday, May 10, 2019

An Evolutionary Perspective On Free Will Belief

Cory Clark & Bo Winegard
Science Trends
Originally posted April 9, 2019

Here is an excerpt:

Both scholars and everyday people seem to agree that free will (whatever it is) is a prerequisite for moral responsibility (though note, among philosophers, there are numerous definitions and camps regarding how free will and moral responsibility are linked). This suggests that a crucial function of free will beliefs is the promotion of holding others morally responsible. And research supports this. Specifically, when people are exposed to another’s harmful behavior, they increase their broad beliefs in the human capacity for free action. Thus, believing in free will might facilitate the ability of individuals to punish harmful members of the social group ruthlessly.

But recent research suggests that free will is about more than just punishment. People might seek morally culpable agents not only when desiring to punish, but also when desiring to praise. A series of studies by Clark and colleagues (2018) found that, whereas people generally attributed more free will to morally bad actions than to morally good actions, they attributed more free will to morally good actions than morally neutral ones. Moreover, whereas free will judgments for morally bad actions were primarily driven by affective desires to punish, free will judgments for morally good actions were sensitive to a variety of characteristics of the behavior.