Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Thursday, July 22, 2021

The Possibility of an Ongoing Moral Catastrophe

Williams, E.G. (2015).
Ethic Theory Moral Prac 18, 971–982.
https://doi.org/10.1007/s10677-015-9567-7

Abstract

This article gives two arguments for believing that our society is unknowingly guilty of serious, large-scale wrongdoing. First is an inductive argument: most other societies, in history and in the world today, have been unknowingly guilty of serious wrongdoing, so ours probably is too. Second is a disjunctive argument: there are a large number of distinct ways in which our practices could turn out to be horribly wrong, so even if no particular hypothesized moral mistake strikes us as very likely, the disjunction of all such mistakes should receive significant credence. The article then discusses what our society should do in light of the likelihood that we are doing something seriously wrong: we should regard intellectual progress, of the sort that will allow us to find and correct our moral mistakes as soon as possible, as an urgent moral priority rather than as a mere luxury; and we should also consider it important to save resources and cultivate flexibility, so that when the time comes to change our policies we will be able to do so quickly and smoothly.
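
Williams's disjunctive argument is, at bottom, a point about probability: many individually unlikely hypotheses can add up to a likely disjunction. As a toy illustration only (the paper gives no formula; the independence and equal-probability assumptions below are mine, not Williams's), the arithmetic looks like this:

```python
# Toy illustration of the disjunctive argument. Assumes the candidate
# mistakes are independent and equally likely -- simplifying assumptions
# made here for illustration, not claims from Williams's paper.
def credence_in_some_mistake(probabilities):
    """P(at least one moral mistake) = 1 - P(no mistake at all)."""
    p_none = 1.0
    for p in probabilities:
        p_none *= 1.0 - p
    return 1.0 - p_none

# Twenty distinct candidate mistakes, each given only 5% credence,
# still leave ~64% credence that at least one of them is real.
print(credence_in_some_mistake([0.05] * 20))  # ~0.64
```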

Wednesday, July 21, 2021

The Parliamentary Approach to Moral Uncertainty

Toby Newberry & Toby Ord
Future of Humanity Institute
University of Oxford 2021

Abstract

We introduce a novel approach to the problem of decision-making under moral uncertainty, based
on an analogy to a parliament. The appropriate choice under moral uncertainty is the one that
would be reached by a parliament comprised of delegates representing the interests of each moral
theory, who number in proportion to your credence in that theory. We present what we see as the
best specific approach of this kind (based on proportional chances voting), and also show how the
parliamentary approach can be used as a general framework for thinking about moral uncertainty,
where extant approaches to addressing moral uncertainty correspond to parliaments with different
rules and procedures.

Here is an excerpt:

Moral Parliament

Imagine that each moral theory in which you have credence got to send delegates to an internal parliament, where the number of delegates representing each theory was proportional to your credence in that theory. Now imagine that these delegates negotiate with each other, advocating on behalf of their respective moral theories, until eventually the parliament reaches a decision by the delegates voting on the available options. This would provide a novel approach to decision-making under moral uncertainty that may avoid some of the problems that beset the others, and it may even provide a new framework for thinking about moral uncertainty more broadly.
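
To make the mechanism concrete, here is a deliberately minimal sketch (my own, not the authors' formalization): it omits the negotiation stage entirely and reduces proportional chances voting to its core idea of drawing the winning option with probability proportional to the votes it receives. All theory names, options, and numbers are illustrative.

```python
import random

def moral_parliament(credences, preferences, seed=None):
    """One simplified round of a Moral Parliament (negotiation omitted).

    credences:   {theory: credence}, credences summing to 1.
    preferences: {theory: {option: how strongly the theory favors it}}.

    Seats are apportioned in proportion to credence; each delegate votes
    for its theory's top option; the winner is then drawn with probability
    proportional to votes ('proportional chances voting').
    """
    rng = random.Random(seed)
    votes = {}
    for theory, credence in credences.items():
        n_delegates = round(100 * credence)  # 100 seats, split by credence
        best_option = max(preferences[theory], key=preferences[theory].get)
        votes[best_option] = votes.get(best_option, 0) + n_delegates
    options, weights = zip(*votes.items())
    return rng.choices(options, weights=weights)[0]

# 60% credence in a consequentialist theory, 40% in a deontological one.
choice = moral_parliament(
    credences={"consequentialism": 0.6, "deontology": 0.4},
    preferences={
        "consequentialism": {"lie": 0.9, "tell_truth": 0.1},
        "deontology": {"lie": 0.0, "tell_truth": 1.0},
    },
)
print(choice)  # 'lie' ~60% of the time, 'tell_truth' ~40% of the time
```

Note one property that falls straight out of the mechanism: under proportional chances, a minority theory's delegates still prevail with probability proportional to their numbers, rather than being outvoted on every single issue as under simple majority rule.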

(cut)

Here, we endorse a common-sense approach to the question of scale which has much in common with standard decision-theoretic conventions. The suggestion is that one should convene Moral Parliament for those decision-situations to which it is intuitively appropriate, such as those involving non-trivial moral stakes, where the possible options are relatively well-defined, and so on. Normatively speaking, if Moral Parliament is the right approach to take to moral uncertainty, then it may also be right to apply it to all decision-situations (however ‘decision-situation’ is defined). But practically speaking, this would be very difficult to achieve. This move has essentially the same implications as the approach of sidestepping the question but comes with a positive endorsement of Moral Parliament’s application to ‘the kinds of decision-situations typically described in papers on moral uncertainty’. This is the sense in which the common-sense approach resembles standard decision-theoretic conventions.

Tuesday, July 20, 2021

Morally Motivated Networked Harassment as Normative Reinforcement

Marwick, A. E. (2021). 
Social Media + Society. 

Abstract

While online harassment is recognized as a significant problem, most scholarship focuses on descriptions of harassment and its effects. We lack explanations of why people engage in online harassment beyond simple bias or dislike. This article puts forth an explanatory model where networked harassment on social media functions as a mechanism to enforce social order. Drawing from examples of networked harassment taken from qualitative interviews with people who have experienced harassment (n = 28) and Trust & Safety workers at social platforms (n = 9), the article builds on Brady, Crockett, and Van Bavel’s model of moral contagion to explore how moral outrage is used to justify networked harassment on social media. In morally motivated networked harassment (MMNH), a member of a social network or online community accuses a target of violating their network’s norms, triggering moral outrage. Network members send harassing messages to the target, reinforcing their adherence to the norm and signaling network membership. Frequently, harassment results in the accused self-censoring and thus regulates speech on social media. Neither platforms nor legal regulations protect against this form of harassment. This model explains why people participate in networked harassment and suggests possible interventions to decrease its prevalence.

From the Conclusion

Ultimately, conceptualizing harassment as morally motivated and understanding it as a technique of norm reinforcement explains why people participate in it, a necessary step to decreasing it. This model may open creative solutions to harassment and content moderation. MMNH also recognizes that harassment, while more endemic to minoritized communities, may be experienced by people from a wide variety of identities and political commitments, suggesting many possibilities for future research. Current technical and legal models of harassment do not protect against networked harassment; by providing a new model, I hope to contribute to lessening its prevalence.

Monday, July 19, 2021

Non-consensual personified sexbots: an intrinsic wrong

Lancaster, K. 
Ethics Inf Technol (2021). 

Abstract

Humanoid robots used for sexual purposes (sexbots) are beginning to look increasingly lifelike. It is possible for a user to have a bespoke sexbot created which matches their exact requirements in skin pigmentation, hair and eye colour, body shape, and genital design. This means that it is possible—and increasingly easy—for a sexbot to be created which bears a very high degree of resemblance to a particular person. There is a small but steadily increasing literature exploring some of the ethical issues surrounding sexbots; however, sexbots made to look like particular people have not yet been philosophically addressed in the literature. In this essay I argue that creating a lifelike sexbot to represent and resemble someone is an act of sexual objectification which morally requires consent, and that doing so without the person’s consent is intrinsically wrong. I consider two sexbot creators: Roy and Fred. Roy creates a sexbot of Katie with her consent, and Fred creates a sexbot of Jane without her consent. I draw on the work of Alan Goldman, Rae Langton, and Martha Nussbaum in particular to demonstrate that creating a sexbot of a particular person requires consent if it is to be intrinsically permissible.

From the Conclusion

Although sexbots may bring about a multitude of negative consequences for individuals and society, I have set these aside in order to focus on the intrinsically wrong act of creating a personified sexbot without the consent of the human subject. I have maintained that creating a personified sexbot is an act of sexual objectification directed towards that particular person which may or may not be permissible, depending on whether the human subject’s consent was obtained. Using Nussbaum’s Kantian-inspired argument, I have shown that non-consensually sexbotifying a human subject involves using them merely as a means, which is intrinsically wrong. Meanwhile, in a sexbotification case where the human subject’s prior consent is obtained, she has not been intrinsically wronged by the creation of the sexbot because she has not been used merely as a means to an end. With personified sexbots, consent of the human subject is a moral prerequisite, and is transformative when obtained. In other words, in cases of non-consensual sexbotification, the lack of consent is the wrong-making feature of the act. Even if it were the case that creating any sexbot is intrinsically wrong because it objectifies women qua women, it is still right to maintain that sexbotifying a woman without her consent is an additional intrinsic wrong.

Sunday, July 18, 2021

‘They’re Not True Humans’: Beliefs About Moral Character Drive Categorical Denials of Humanity

Phillips, B. (2021, May 29). 

Abstract

In examining the cognitive processes that drive dehumanization, laboratory-based research has focused on non-categorical denials of humanity. Here, we examine the conditions under which people are willing to categorically deny that someone else is human. In doing so, we argue that people harbor a dual character concept of humanity. Research has found that dual character concepts have two independent sets of criteria for their application, one of which is normative. Across four experiments, we found evidence that people deploy one criterion according to which being human is a matter of being a Homo sapiens; as well as a normative criterion according to which being human is a matter of possessing a deep-seated commitment to do the morally right thing. Importantly, we found that people are willing to affirm that someone is human in the species sense, but deny that they are human in the normative sense, and vice versa. These findings suggest that categorical denials of humanity are not confined to extreme cases outside the laboratory. They also suggest a solution to “the paradox of dehumanization.”

(cut)

6.2. The paradox of dehumanization

The findings reported here also suggest a solution to the paradox of dehumanization. Recall that in paradigmatic cases of dehumanization, such as the Holocaust, the perpetrators tend to attribute certain uniquely human traits to their victims. For example, the Nazis frequently characterized Jewish people as criminals and traitors. They also treated them as moral agents, and subjected them to severe forms of punishment and humiliation (see Gutman and Berenbaum, 1998). Criminality, treachery, and moral agency are not capacities that we tend to attribute to nonhuman animals. Thus, can we really say that the Nazis thought of their victims as nonhuman? In responding to this paradox, some theorists have suggested that the perpetrators in these paradigmatic cases do not, in fact, think of their victims as nonhuman (see Appiah, 2008; Bloom, 2017; Manne, 2016, 2018, chapter 5; Over, 2020; Rai et al., 2017). Other theorists have suggested that the perpetrators harbor inconsistent representations of their victims, simultaneously thinking of them as both human and subhuman (Smith, 2016, 2020). Our findings suggest a third possibility: namely, that the perpetrators harbor a dual character concept of humanity, categorizing their victims as human in one sense, but denying that they are human in another sense. For example, it is true that the Nazis attributed certain uniquely human traits to their victims, such as criminality. However, when categorizing their victims as evil criminals, the Nazis may have been thinking of them as nonhuman in the normative sense, while recognizing them as human in the species sense (for a relevant discussion, see Steizinger, 2018). This squares with the fact that when the Nazis likened Jewish people to certain animals, such as rats, this often took on a moralizing tone. For example, in an antisemitic book entitled The Eternal Jew (Nachfolger, 1937), Jewish neighborhoods in Berlin were described as “breeding grounds of criminal and political vermin.” Similarly, when the Nazis referred to Jews as “subhumans,” they often characterized them as bad moral agents. For example, as was mentioned above, Goebbels described Bolshevism as “the declaration of war by Jewish-led international subhumans against culture itself.” Similarly, in one 1943 Nazi pamphlet, Marxist values are described as appealing to subhumans, while liberalist values are described as “allowing the triumph of subhumans” (Anonymous, 1943, chapter 1).

Saturday, July 17, 2021

Bad machines corrupt good morals

Köbis, N., Bonnefon, J.-F., & Rahwan, I.
Nat Hum Behav 5, 679–685 (2021). 
https://doi.org/10.1038/s41562-021-01128-2

Abstract

As machines powered by artificial intelligence (AI) influence humans’ behaviour in ways that are both like and unlike the ways humans influence each other, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioural science, human–computer interaction and AI research. We propose four main social roles through which both humans and machines can influence ethical behaviour. These are: role model, advisor, partner and delegate. When AI agents become influencers (role models or advisors), their corrupting power may not exceed the corrupting power of humans (yet). However, AI agents acting as enablers of unethical behaviour (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction. On the basis of these insights, we outline a research agenda to gain behavioural insights for better AI oversight.

From the end of the article

Another policy-relevant research question is how to integrate awareness of the corrupting force of AI tools into the innovation process. New AI tools hit the market on a daily basis. The current approach of ‘innovate first, ask for forgiveness later’ has caused considerable backlash and even demands for banning AI technology such as facial recognition. As a consequence, ethical considerations must enter the innovation and publication process of AI developments. Current efforts to develop ethical labels for responsible AI and to crowdsource citizens’ preferences about ethical AI are mostly concerned about the direct unethical consequences of AI behaviour and not its influence on the ethical conduct of the humans who interact with and through it. A thorough experimental approach to responsible AI will need to expand concerns about direct AI-induced harm to concerns about how bad machines can corrupt good morals.

Friday, July 16, 2021

“False positive” emotions, responsibility, and moral character

Anderson, R.A., et al.
Cognition
Volume 214, September 2021, 104770

Abstract

People often feel guilt for accidents—negative events that they did not intend or have any control over. Why might this be the case? Are there reputational benefits to doing so? Across six studies, we find support for the hypothesis that observers expect “false positive” emotions from agents during a moral encounter – emotions that are not normatively appropriate for the situation but still trigger in response to that situation. For example, if a person accidentally spills coffee on someone, most normative accounts of blame would hold that the person is not blameworthy, as the spill was accidental. Self-blame (and the guilt that accompanies it) would thus be an inappropriate response. However, in Studies 1–2 we find that observers rate an agent who feels guilt, compared to an agent who feels no guilt, as a better person, as less blameworthy for the accident, and as less likely to commit moral offenses. These attributions of moral character extend to other moral emotions like gratitude, but not to nonmoral emotions like fear, and are not driven by perceived differences in overall emotionality (Study 3). In Study 4, we demonstrate that agents who feel extremely high levels of inappropriate (false positive) guilt (e.g., agents who experience guilt but are not at all causally linked to the accident) are not perceived as having a better moral character, suggesting that merely feeling guilty is not sufficient to receive a boost in judgments of character. In Study 5, using a trust game design, we find that observers are more willing to trust others who experience false positive guilt compared to those who do not. In Study 6, we find that false positive experiences of guilt may actually be a reliable predictor of underlying moral character: self-reported predicted guilt in response to accidents negatively correlates with higher scores on a psychopathy scale.

From the General Discussion

It seems reasonable to think that there would be some benefit to communicating these moral emotions as a signal of character, and to being able to glean information about the character of others from observations of their emotional responses. If a propensity to feel guilt makes it more likely that a person is cooperative and trustworthy, observers would need to discriminate between people who are and are not prone to guilt. Guilt could therefore serve as an effective regulator of moral behavior in others in its role as a reliable signal of good character.  This account is consistent with theoretical accounts of emotional expressions more generally, either in the face, voice, or body, as a route by which observers make inferences about a person’s underlying dispositions (Frank, 1988). Our results suggest that false positive emotional responses specifically may provide an additional, and apparently informative, source of evidence for one’s propensity toward moral emotions and moral behavior.

Thursday, July 15, 2021

Overconfidence in news judgments is associated with false news susceptibility

B. A. Lyons, et al.
PNAS, Jun 2021, 118 (23) e2019527118
DOI: 10.1073/pnas.2019527118

Abstract

We examine the role of overconfidence in news judgment using two large nationally representative survey samples. First, we show that three in four Americans overestimate their relative ability to distinguish between legitimate and false news headlines; respondents place themselves 22 percentiles higher than warranted on average. This overconfidence is, in turn, correlated with consequential differences in real-world beliefs and behavior. We show that overconfident individuals are more likely to visit untrustworthy websites in behavioral data; to fail to successfully distinguish between true and false claims about current events in survey questions; and to report greater willingness to like or share false content on social media, especially when it is politically congenial. In all, these results paint a worrying picture: The individuals who are least equipped to identify false news content are also the least aware of their own limitations and, therefore, more susceptible to believing it and spreading it further.
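
The headline measure here is a simple gap: where respondents place themselves minus where their performance actually places them. Below is a minimal sketch of that overconfidence score, assuming a percentile-rank construction; this is my reconstruction of the general idea, not the authors' code, and the variable names are hypothetical.

```python
import numpy as np

def overconfidence_scores(perceived_pct, headline_scores):
    """Overconfidence = self-placed percentile minus actual percentile.

    perceived_pct:   each respondent's self-placed percentile (0-100).
    headline_scores: each respondent's accuracy on a headline-rating task.
    Positive values mean the respondent overestimates their relative skill.
    """
    scores = np.asarray(headline_scores, dtype=float)
    # Actual percentile rank: share of the sample scoring strictly lower.
    actual_pct = 100.0 * (scores[:, None] > scores[None, :]).mean(axis=1)
    return np.asarray(perceived_pct, dtype=float) - actual_pct

# Example: the first respondent self-places at the 90th percentile but
# actually sits near the bottom third of this tiny sample.
print(overconfidence_scores([90, 50, 10], [5, 8, 2]))
# approx. [ 56.7, -16.7, 10.0 ]
```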

Significance

Although Americans believe the confusion caused by false news is extensive, relatively few indicate having seen or shared it—a discrepancy suggesting that members of the public may not only have a hard time identifying false news but fail to recognize their own deficiencies at doing so. If people incorrectly see themselves as highly skilled at identifying false news, they may unwittingly participate in its circulation. In this large-scale study, we show that not only is overconfidence extensive, but it is also linked to both self-reported and behavioral measures of false news website visits, engagement, and belief. Our results suggest that overconfidence may be a crucial factor for explaining how false and low-quality information spreads via social media.

Wednesday, July 14, 2021

Popularity is linked to neural coordination: Neural evidence for an Anna Karenina principle in social networks

Baek, E. C., et al. (2021).
https://doi.org/10.31234/osf.io/6fj2p

Abstract

People differ in how they attend to, interpret, and respond to their surroundings. Convergent processing of the world may be one factor that contributes to social connections between individuals. We used neuroimaging and network analysis to investigate whether the most central individuals in their communities (as measured by in-degree centrality, a notion of popularity) process the world in a particularly normative way. More central individuals had exceptionally similar neural responses to their peers and especially to each other in brain regions associated with high-level interpretations and social cognition (e.g., in the default-mode network), whereas less-central individuals exhibited more idiosyncratic responses. Self-reported enjoyment of and interest in stimuli followed a similar pattern, but accounting for these data did not change our main results. These findings suggest an “Anna Karenina principle” in social networks: Highly-central individuals process the world in exceptionally similar ways, whereas less-central individuals process the world in idiosyncratic ways.
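
Both quantities in this design can be stated concretely: in-degree centrality is just the number of peers who name you, and neural similarity is the inter-subject correlation of regional response time courses. Here is a minimal sketch under those assumptions; it is not the authors' pipeline, and the data below are random placeholders.

```python
import numpy as np

def in_degree_centrality(nominations):
    """nominations[i][j] = 1 if person i names person j as a friend.
    In-degree centrality = how many peers name you (a notion of popularity)."""
    return np.asarray(nominations).sum(axis=0)

def mean_neural_similarity(timecourses):
    """timecourses: (n_people, n_timepoints) responses from one brain region.
    Returns each person's mean Pearson correlation with every peer."""
    r = np.corrcoef(timecourses)   # pairwise inter-subject correlations
    np.fill_diagonal(r, np.nan)    # ignore self-similarity
    return np.nanmean(r, axis=1)

# The Anna Karenina pattern would show similarity rising with centrality.
# (Toy data; the study used fMRI responses to naturalistic videos.)
rng = np.random.default_rng(0)
signals = rng.standard_normal((4, 200))
print(in_degree_centrality([[0, 1, 1, 0],
                            [1, 0, 1, 0],
                            [1, 1, 0, 0],
                            [1, 1, 1, 0]]))  # [3 3 3 0]
print(mean_neural_similarity(signals))
```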

Discussion

What factors distinguish highly-central individuals in social networks? Our results are consistent with the notion that popular individuals (who are central in their social networks) process the world around them in normative ways, whereas unpopular individuals process the world around them idiosyncratically. Popular individuals exhibited greater mean neural similarity with their peers than unpopular individuals in several regions of the brain, including ones in which similar neural responding has been associated with shared higher-level interpretations of events and social cognition (e.g., regions of the default mode network) while viewing dynamic, naturalistic stimuli. Our results indicate that the relationship between popularity and neural similarity follows an Anna Karenina principle. Specifically, we observed that popular individuals were very similar to each other in their neural responses, whereas unpopular individuals were dissimilar both to each other and to their peers’ normative way of processing the world. Our findings suggest that highly-central people process and respond to the world around them in a manner that allows them to relate to and connect with many of their peers and that less-central people exhibit idiosyncrasies that may result in greater difficulty in relating to others.