Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, February 19, 2021

The Cognitive Science of Fake News

Pennycook, G., & Rand, D. G. 
(2020, November 18). 

Abstract

We synthesize a burgeoning literature investigating why people believe and share “fake news” and other misinformation online. Surprisingly, the evidence contradicts a common narrative whereby partisanship and politically motivated reasoning explain failures to discern truth from falsehood. Instead, poor truth discernment is linked to a lack of careful reasoning and relevant knowledge, and to the use of familiarity and other heuristics. Furthermore, there is a substantial disconnect between what people believe and what they will share on social media. This dissociation is largely driven by inattention, rather than purposeful sharing of misinformation. As a result, effective interventions can nudge social media users to think about accuracy, and can leverage crowdsourced veracity ratings to improve social media ranking algorithms.
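As a rough illustration of the review's closing suggestion about ranking algorithms (this is not code or a method from the paper; the source names, ratings, and weighting scheme below are invented for the example), crowdsourced veracity ratings can be blended into a feed-ranking score so that content from sources the crowd rates as untrustworthy is demoted even when it is highly engaging:

```python
from statistics import mean

# Hypothetical crowd ratings of source trustworthiness (0 = untrustworthy, 1 = trustworthy).
crowd_ratings = {
    "example-news.com": [0.9, 0.8, 0.85, 0.9],
    "example-clickbait.net": [0.2, 0.3, 0.25, 0.1],
}

def rerank(items, engagement_weight=0.5):
    """Re-rank feed items by blending engagement with crowdsourced trust.

    items: list of (url, source, engagement_score in 0-1).
    Sources with no ratings get a neutral trust score of 0.5.
    """
    def score(item):
        _url, source, engagement = item
        trust = mean(crowd_ratings.get(source, [0.5]))
        return engagement_weight * engagement + (1 - engagement_weight) * trust
    return sorted(items, key=score, reverse=True)

feed = [
    ("https://example-clickbait.net/shocking-story", "example-clickbait.net", 0.95),
    ("https://example-news.com/sober-report", "example-news.com", 0.60),
]
for url, _source, _engagement in rerank(feed):
    print(url)  # the trusted source now outranks the more engaging low-quality one
```

The empirical point in the review is only that aggregated layperson ratings are informative enough to be used this way; how they are weighted against engagement signals is a design choice the sketch leaves open.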

From the Discussion

Indeed, recent research shows that a simple accuracy nudge intervention – specifically, having participants rate the accuracy of a single politically neutral headline (ostensibly as part of a pretest) prior to making judgments about social media sharing – improves the extent to which people discern between true and false news content when deciding what to share online in survey experiments. This approach has also been successfully deployed in a large-scale field experiment on Twitter, in which messages asking users to rate the accuracy of a random headline were sent to thousands of accounts that had recently shared links to misinformation sites. This subtle nudge significantly increased the quality of the content they subsequently shared; see Figure 3B. Furthermore, survey experiments have shown that asking participants to explain how they know whether a headline is true or false before sharing it increases sharing discernment, and having participants rate accuracy at the time of encoding protects against familiarity effects.

Thursday, February 18, 2021

Intuitive Expertise in Moral Judgements.

Wiegmann, A., & Horvath, J. 
(2020, December 22). 

Abstract

According to the ‘expertise defence’, experimental findings which suggest that intuitive judgements about hypothetical cases are influenced by philosophically irrelevant factors do not undermine their evidential use in (moral) philosophy. This defence assumes that philosophical experts are unlikely to be influenced by irrelevant factors. We discuss relevant findings from experimental metaphilosophy that largely tell against this assumption. To advance the debate, we present the most comprehensive experimental study of intuitive expertise in ethics to date, which tests five well-known biases of judgement and decision-making among expert ethicists and laypeople. We found that even expert ethicists are affected by some of these biases, but also that they enjoy a slight advantage over laypeople in some cases. We discuss the implications of these results for the expertise defence, and conclude that they still do not support the defence as it is typically presented in (moral) philosophy.

Conclusion

We first considered the experimental restrictionist challenge to intuitions about cases, with a special focus on moral philosophy, and then introduced the expertise defence as the most popular reply. The expertise defence makes the empirically testable assumption that the case intuitions of expert philosophers are significantly less influenced by philosophically irrelevant factors than those of laypeople. The upshot of our discussion of relevant findings from experimental metaphilosophy was twofold: first, extant findings largely tell against the expertise defence, and second, the number of published studies and investigated biases is still fairly small. To advance the debate about the expertise defence in moral philosophy, we thus tested five well-known biases of judgement and decision-making among expert ethicists and laypeople. Averaged across all biases and scenarios, the intuitive judgements of both experts and laypeople were clearly susceptible to bias. However, moral philosophers were also less biased in two of the five cases (Focus and Prospect), although we found no significant expert-lay differences in the remaining three cases.

In comparison to previous findings (for example Schwitzgebel and Cushman [2012, 2015]; Wiegmann et al. [2020]), our results appear to be relatively good news for the expertise defence, because they suggest that moral philosophers are less influenced by some morally irrelevant factors, such as a simple saving/killing framing. On the other hand, our study does not support the very general armchair versions of the expertise defence that one often finds in metaphilosophy, which try to reassure (moral) philosophers that they need not worry about the influence of philosophically irrelevant factors. At best, however, we need not worry about just a few cases and a few human biases, and even that modest hypothesis can only be upheld on the basis of sufficient empirical research.

Wednesday, February 17, 2021

Distributed Cognition and Distributed Morality: Agency, Artifacts and Systems

Heersmink, R. 
Sci Eng Ethics 23, 431–448 (2017). 
https://doi.org/10.1007/s11948-016-9802-1

Abstract

There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. I specifically conceptualise how such artifacts (a) scaffold and extend moral reasoning and decision-making processes and (b) have a certain moral status which is contingent on their cognitive status, and I consider (c) whether responsibility can be attributed to distributed systems. This paper is primarily written for those interested in the intersection of cognitive and moral theory as it relates to artifacts, but also for those independently interested in philosophical debates in extended and distributed cognition and ethics of (cognitive) technology.

Discussion

Both Floridi and Verbeek argue that moral actions, either positive or negative, can be the result of interactions between humans and technology, giving artifacts a much more prominent role in ethical theory than most philosophers do. They both develop a non-anthropocentric systems approach to morality. Floridi focuses on large-scale “multiagent systems”, whereas Verbeek focuses on small-scale “human–technology associations”. But both attribute morality or moral agency to systems comprising humans and technological artifacts. On their views, moral agency is thus a system property and not found exclusively in human agents. Does this mean that the artifacts and software programs involved in the process have moral agency? Neither of them attributes moral agency to the artifactual components of the larger system. It is not inconsistent to say that the human-artifact system has moral agency without saying that its artifactual components have moral agency. Systems often have different properties than their components. The difference between Floridi’s and Verbeek’s approaches roughly mirrors the difference between distributed and extended cognition, in that Floridi and distributed cognition theory focus on large-scale systems without central controllers, whereas Verbeek and extended cognition theory focus on small-scale systems in which agents interact with and control an informational artifact. In Floridi’s example, the technology seems semi-autonomous: the software and computer systems automatically do what they are designed to do. Presumably, the money is automatically transferred to Oxfam, implying that technology is a mere cog in a larger socio-technical system that realises positive moral outcomes. There seems to be no central controller in this system: it is therefore difficult to see it as an extended agency whose intentions are being realised.

Tuesday, February 16, 2021

Strategic Regulation of Empathy

Weisz, E., & Cikara, M. 
(2020, October 9).

Abstract

Empathy is an integral part of socio-emotional well-being, yet recent research has highlighted some of its downsides. Here we examine literature that establishes when, how much, and what aspects of empathy promote specific outcomes. After reviewing a theoretical framework which characterizes empathy as a suite of separable components, we examine evidence showing how dissociations of these components affect important socio-emotional outcomes and describe emerging evidence suggesting that these components can be independently and deliberately modulated. Finally, we advocate for a new approach to a multi-component view of empathy which accounts for the interrelations among components. This perspective advances scientific conceptualization of empathy and offers suggestions for tailoring empathy to help people realize their social, emotional, and occupational goals.

From Concluding Remarks

Early research on empathy regarded it as a monolithic construct. This characterization ultimately gave rise to a second wave of empathy-related research, which explicitly examined dissociations among empathy-related components. Subsequently, researchers noticed that individual components held different predictive power over key outcomes such as helping and occupational burnout. As described above, however, there are many instances in which these components track together in the real world, suggesting that although they can dissociate, they often operate in tandem.

Because empathy-related components rely on separable neural systems, the field of social neuroscience has already made significant progress toward the goal of characterizing instances when components do (or do not) track together. For example, although affective and cognitive channels can independently contribute to judgments of others’ emotional states, they also operate in synchrony during more naturalistic socio-emotional tasks. However, far more behavioral research is needed to characterize the co-occurrence of components in people’s everyday social interactions. Because people differ in their tendencies to engage distinct components of empathy, a better understanding of the separability and interrelations of these components in real-world social scenarios can help tailor empathy-training programs to promote desirable outcomes. Empathy-training efforts are on average effective (Hedges’ g = 0.51) but generally intervene on empathy as a whole (rather than specific components).
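As background for the effect size quoted above (this is standard statistical material, not a formula from the review itself), Hedges’ g is a standardized mean difference between a treatment and a control group, corrected for small-sample bias:

```latex
g = J \cdot \frac{\bar{x}_T - \bar{x}_C}{s_p}, \qquad
s_p = \sqrt{\frac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}, \qquad
J \approx 1 - \frac{3}{4(n_T + n_C - 2) - 1}
```

A g of 0.51 therefore means that trained groups score roughly half a pooled standard deviation higher on empathy outcomes than untrained groups, conventionally a medium-sized effect.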

Monday, February 15, 2021

Response time modelling reveals evidence for multiple, distinct sources of moral decision caution

Andrejević, M., et al. 
(2020, November 13). 

Abstract

People are often cautious in delivering moral judgments of others’ behaviours, as falsely accusing others of wrongdoing can be costly for social relationships. Caution might further be present when making judgements in information-dynamic environments, as contextual updates can change our minds. This study investigated the processes with which moral valence and context expectancy drive caution in moral judgements. Across two experiments, participants (N = 122) made moral judgements of others’ sharing actions. Prior to judging, participants were informed whether contextual information regarding the deservingness of the recipient would follow. We found that participants slowed their moral judgements when judging negatively valenced actions and when expecting contextual updates. Using a diffusion decision model framework, these changes were explained by shifts in drift rate and decision bias (valence) and boundary setting (context), respectively. These findings demonstrate how moral decision caution can be decomposed into distinct aspects of the unfolding decision process.

From the Discussion

Our finding that participants slowed their judgments when expecting contextual information is consistent with previous research showing that people are more cautious when aware that they are more prone to making mistakes. Notably, previous research has demonstrated this effect for decision mistakes in tasks in which people are not given additional information or a chance to change their minds. The current findings show that this effect also extends to dynamic decision-making contexts, in which learning additional information can lead to changes of mind. Crucially, here we show that this type of caution can be explained by the widening of the decision boundary separation in a process model of decision-making.
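To make the modelling vocabulary concrete, here is a minimal Python sketch of a single drift diffusion trial (illustrative only; it is not the authors' model code and the parameter values are arbitrary). Evidence starts at a fraction z of the boundary separation a and accumulates with drift rate v plus noise until it reaches either boundary; widening a, the change the authors associate with expecting contextual updates, produces slower, more cautious responses.

```python
import numpy as np

def simulate_ddm(v=0.8, a=1.0, z=0.5, dt=0.001, noise=1.0, rng=None):
    """Simulate one drift diffusion trial.

    v: drift rate (evidence quality), a: boundary separation,
    z: starting point as a fraction of a (0.5 = unbiased).
    Returns (choice, response_time), choice = 1 for the upper boundary.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = z * a                       # starting evidence level
    t = 0.0
    while 0.0 < x < a:              # accumulate until a boundary is crossed
        x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t

rng = np.random.default_rng(0)
for label, a in [("narrow boundary (less caution)", 1.0),
                 ("wide boundary (more caution)", 2.0)]:
    rts = [simulate_ddm(a=a, rng=rng)[1] for _ in range(500)]
    print(f"{label}: mean RT = {np.mean(rts):.3f} s")
```

Fitting a model of this kind to observed choices and response times is what allows slowing to be attributed specifically to boundary separation (caution) rather than to changes in drift rate or starting-point bias.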

Sunday, February 14, 2021

Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?

Frank, L., Nyholm, S. 
Artif Intell Law 25, 305–323 (2017).
https://doi.org/10.1007/s10506-017-9212-y

Abstract

The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally we canvass the most influential existing literature on the ethics of sex with robots.

Here is an excerpt:

Here, we want to ask a similar question regarding how and whether sex robots should be brought into the legal community. Our overarching question is: is it conceivable, possible, and desirable to create autonomous and smart sex robots that are able to give (or withhold) consent to sex with a human person? For each of these three sub-questions (whether it is conceivable, possible, and desirable to create sex robots that can consent) we consider both “no” and “yes” answers. We are here mainly interested in exploring these questions in general terms and motivating further discussion. However, in discussing each of these sub-questions we will argue that, prima facie, the “yes” answers appear more convincing than the “no” answers, at least if the sex robots are of a highly sophisticated sort.

The rest of our discussion divides into the following sections. We start by saying a little more about what we understand by a “sex robot”. We also say more about what consent is, and we review the small literature that is starting to emerge on our topic (Sect. 1). We then turn to the questions of whether it is conceivable, possible, and desirable to create sex robots capable of giving consent—and discuss “no” and “yes” answers to all of these questions. When we discuss the case for considering it desirable to require robotic consent to sex, we argue that there can be both non-instrumental and instrumental reasons in favor of such a requirement (Sects. 2–4). We conclude with a brief summary (Sect. 5).

Saturday, February 13, 2021

Allocating moral responsibility to multiple agents

Gantman, A. P., Sternisko, A., et al.
Journal of Experimental Social Psychology
Volume 91, November 2020.

Abstract

Moral and immoral actions often involve multiple individuals who play different roles in bringing about the outcome. For example, one agent may deliberate and decide what to do while another may plan and implement that decision. We suggest that the Mindset Theory of Action Phases provides a useful lens through which to understand these cases and the implications that these different roles, which correspond to different mindsets, have for judgments of moral responsibility. In Experiment 1, participants learned about a disastrous oil spill in which one company made decisions about a faulty oil rig, and another installed that rig. Participants judged the company who made decisions as more responsible than the company who implemented them. In Experiment 2 and a direct replication, we tested whether people judge implementers to be morally responsible at all. We examined a known asymmetry in blame and praise. Moral agents received blame for actions that resulted in a bad outcome but not praise for the same action that resulted in a good outcome. We found this asymmetry for deciders but not implementers, an indication that implementers were judged through a moral lens to a lesser extent than deciders. Implications for allocating moral responsibility across multiple agents are discussed.

Highlights

• Acts can be divided into parts and thereby roles (e.g., decider, implementer).

• Deliberating agent earns more blame than implementing one for a bad outcome.

• Asymmetry in blame vs. praise for the decider but not the implementer.

• Asymmetry in blame vs. praise suggests only the decider is judged as a moral agent.

• Effect is attenuated if decider's job is primarily to implement.

Friday, February 12, 2021

Measuring Implicit Intergroup Biases.

Lai, C. K., & Wilson, M. 
(2020, December 9).

Abstract

Implicit intergroup biases are automatically activated prejudices and stereotypes that may influence judgments of others on the basis of group membership. We review evidence on the measurement of implicit intergroup biases, finding: implicit intergroup biases reflect the personal and the cultural, implicit measures vary in reliability and validity, and implicit measures vary greatly in their prediction of explicit and behavioral outcomes due to theoretical and methodological moderators. We then discuss three challenges to the application of implicit intergroup biases to real‐world problems: (1) a lack of research on social groups of scientific and public interest, (2) developing implicit measures with diagnostic capabilities, and (3) resolving ongoing ambiguities in the relationship between implicit bias and behavior. Making progress on these issues will clarify the role of implicit intergroup biases in perpetuating inequality.

(cut)

Predictive Validity

Implicit intergroup biases are predictive of explicit biases, behavioral outcomes, and regional differences in inequality.

Relationship to explicit prejudice & stereotypes. 

The relationship between implicit and explicit measures of intergroup bias is consistently positive, but the size of the relationship depends on the topic. In a large-scale study of 57 attitudes (Nosek, 2005), the relationship between IAT scores and explicit intergroup attitudes was as high as r = .59 (Democrats vs. Republicans) and as low as r = .33 (European Americans vs. African Americans) or r = .10 (Thin people vs. Fat people). Generally, implicit-explicit relations are lower in studies on intergroup topics than in other topics (Cameron et al., 2012; Greenwald et al., 2009). The strength of the relationship between implicit and explicit intergroup biases is moderated by factors which have been documented in one large-scale study and several meta-analyses (Cameron et al., 2012; Greenwald et al., 2009; Hofmann et al., 2005; Nosek, 2005; Oswald et al., 2013). Much of this work has focused on the IAT, finding that implicit-explicit relations are stronger when the attitude is more strongly elaborated, perceived as distinct from other people, has a bipolar structure (i.e., liking for one group implies disliking of the other), and the explicit measure assesses a relative preference rather than an absolute preference (Greenwald et al., 2009; Hofmann et al., 2005; Nosek, 2005).

---------------------
Note: If you are a healthcare professional, you need to be aware of these biases.

Thursday, February 11, 2021

Paranoia and Belief Updating During a Crisis

Suthaharan, P., Reed, E.,  et al. 
(2020, September 4). 

Abstract

The 2019 coronavirus (COVID-19) pandemic has made the world seem unpredictable. During such crises we can experience concerns that others might be against us, culminating perhaps in paranoid conspiracy theories. Here, we investigate paranoia and belief updating in an online sample (N = 1,010) in the United States of America (U.S.A.). We demonstrate that the pandemic increased individuals’ self-rated paranoia and rendered their task-based belief updating more erratic. Local lockdown and reopening policies, as well as culture more broadly, markedly influenced participants’ belief updating: an early and sustained lockdown rendered people’s belief updating less capricious. Masks are clearly an effective public health measure against COVID-19. However, state-mandated mask wearing increased paranoia and induced more erratic behaviour. Remarkably, this was most evident in those states where adherence to mask-wearing rules was poor but where rule following is typically more common. This paranoia may explain the lack of compliance with this simple and effective countermeasure. Computational analyses of participant behaviour suggested that people with higher paranoia expected the task to be more unstable, but at the same time predicted more rewards. In a follow-up study we found that people who were more paranoid endorsed conspiracies about mask-wearing and potential vaccines – again, mask attitude and conspiratorial beliefs were associated with erratic task behaviour and changed priors. Future public health responses to the pandemic might leverage these observations, mollifying paranoia and increasing adherence by tempering people’s expectations of others’ behaviour, and the environment more broadly, and reinforcing compliance.
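The kind of belief updating at issue can be illustrated with a deliberately simplified sketch (this is not the authors' computational model, which is more elaborate; the parameter values are arbitrary): a learner tracks the reward probability of an option, and the weight given to each new outcome grows with how volatile the learner expects the environment to be, so higher expected volatility yields more erratic trial-to-trial updating.

```python
import numpy as np

def update_beliefs(outcomes, expected_volatility=0.1):
    """Kalman-style update of a believed reward probability.

    Higher expected_volatility inflates the prior variance on every trial,
    which raises the effective learning rate and makes the belief (and any
    choices based on it) jump around more, i.e. more erratic updating.
    """
    belief, variance = 0.5, 0.1            # prior mean and uncertainty
    obs_noise = 0.1                        # assumed outcome noise
    trajectory = []
    for o in outcomes:
        variance += expected_volatility    # beliefs decay faster if the world seems unstable
        gain = variance / (variance + obs_noise)   # effective learning rate
        belief += gain * (o - belief)              # prediction-error update
        variance *= (1 - gain)
        trajectory.append(belief)
    return np.array(trajectory)

rng = np.random.default_rng(1)
outcomes = rng.binomial(1, 0.8, size=40)   # one mostly rewarded option
low = update_beliefs(outcomes, expected_volatility=0.01)
high = update_beliefs(outcomes, expected_volatility=0.5)
print("mean belief change per trial, low expected volatility :",
      round(float(np.abs(np.diff(low)).mean()), 3))
print("mean belief change per trial, high expected volatility:",
      round(float(np.abs(np.diff(high)).mean()), 3))
```

Raising the expected volatility makes the belief trace swing more from trial to trial, which is the qualitative pattern described in the abstract as erratic updating among participants reporting higher paranoia.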