Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, February 1, 2022

Network Structure Impacts the Synchronization of Collective Beliefs

Vlasceanu, M., Morais, M. J., & Coman, A. 
(2021). Journal of Cognition and Culture.

Abstract

People’s beliefs are influenced by interactions within their communities. The propagation of this influence through conversational social networks should impact the degree to which community members synchronize their beliefs. To investigate, we recruited a sample of 140 participants and constructed fourteen 10-member communities. Participants first rated the accuracy of a set of statements (pre-test) and were then provided with relevant evidence about them. Then, participants discussed the statements in a series of conversational interactions, following pre-determined network structures (clustered/non-clustered). Finally, they rated the accuracy of the statements again (post-test). The results show that belief synchronization, measured as the increase in belief similarity among individuals within a community from pre-test to post-test, is influenced by the community’s conversational network structure. This synchronization is circumscribed by a degree-of-separation effect and is equivalent in the clustered and non-clustered networks. We also find that conversational content predicts belief change from pre-test to post-test.

From the Discussion

Understanding the mechanisms by which collective beliefs take shape and change over time is essential from a theoretical perspective (Vlasceanu, Enz, & Coman, 2018), but perhaps even more urgent from an applied point of view. This urgency is fueled by recent findings showing that false news diffuses farther, faster, deeper, and more broadly than true news in social networks (Vosoughi, Roy, & Aral, 2018), and that news can determine what people discuss and even change their beliefs (King, Schneer, & White, 2017). And given that beliefs influence people’s behaviors (Shariff & Rhemtulla, 2012; Mangels, Butterfield, Lamb, Good, & Dweck, 2006; Ajzen, 1991; Hochbaum, 1958), understanding the dynamics of collective belief formation is of vital social importance, as these beliefs have the potential to affect some of the most pressing threats our society faces, from pandemics (Pennycook, McPhetres, Zhang, & Rand, 2020) to climate change (Benegal & Scruggs, 2018). Thus, policy makers could use such findings in designing misinformation-reduction campaigns targeting communities (Dovidio & Esses, 2007; Lewandowsky et al., 2012). For instance, these findings suggest that such campaigns be sensitive to the conversational network structures of their targeted communities. Knowing how members of these communities are connected, and leveraging the finding that people synchronize their beliefs mainly with individuals they are directly connected to, could inform intervention designers about how communities with different connectivity structures might respond to their efforts. For example, when targeting a highly interconnected group, intervention designers could expect that administering the intervention to a few well-connected individuals will have a strong impact at the community level. In contrast, when targeting a less interconnected group, intervention designers could administer the intervention to more central individuals for a comparable effect.
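The clustered vs. non-clustered contrast in this study can be illustrated with a toy opinion-averaging simulation. To be clear, this is not the authors' experimental design or statistical model: the two network layouts, the pairwise-averaging update rule, and all parameters below are illustrative assumptions only.

```python
import random

def make_ring(n=10):
    # Non-clustered layout: each member converses only with two neighbors.
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def make_clustered(n=10):
    # Clustered layout: two fully connected 5-member groups joined by one bridge.
    edges = {i: set() for i in range(n)}
    for group in (range(0, n // 2), range(n // 2, n)):
        for a in group:
            for b in group:
                if a != b:
                    edges[a].add(b)
    edges[n // 2 - 1].add(n // 2)   # single bridge between the two clusters
    edges[n // 2].add(n // 2 - 1)
    return edges

def synchronize(edges, rounds=200, step=0.3, seed=0):
    # Start from random pre-test "beliefs" (accuracy ratings in [0, 1]) and
    # run pairwise conversations in which both partners move partway toward
    # their mutual midpoint.
    rng = random.Random(seed)
    beliefs = {i: rng.random() for i in edges}
    for _ in range(rounds):
        a = rng.choice(sorted(edges))
        b = rng.choice(sorted(edges[a]))   # converse with a direct neighbor
        mid = (beliefs[a] + beliefs[b]) / 2
        beliefs[a] += step * (mid - beliefs[a])
        beliefs[b] += step * (mid - beliefs[b])
    return beliefs

def spread(beliefs):
    # Variance of beliefs: lower spread = a more synchronized community.
    vals = list(beliefs.values())
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

Comparing `spread()` at round zero with `spread()` after the conversational rounds gives a rough analogue of the pre-test to post-test increase in belief similarity, and the two layouts let one probe how connectivity structure shapes that convergence.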

Monday, January 31, 2022

The future of work: freedom, justice and capital in the age of artificial intelligence

F. S. de Sio, T. Almeida & J. van den Hoven
(2021). Critical Review of International Social and Political Philosophy.
DOI: 10.1080/13698230.2021.2008204

Abstract

Artificial Intelligence (AI) is predicted to have a deep impact on the future of work and employment. The paper outlines a normative framework to understand and protect human freedom and justice in this transition. The proposed framework is based on four main ideas: going beyond the idea of a Basic Income to compensate the losers in the transition towards AI-driven work, towards a Responsible Innovation approach, in which the development of AI technologies is governed by an inclusive and deliberate societal judgment; going beyond a philosophical conceptualisation of social justice only focused on the distribution of ‘primary goods’, towards one focused on the different goals, values, and virtues of various social practices (Walzer’s ‘spheres of justice’) and the different individual capabilities of persons (Sen’s ‘capabilities’); going beyond a classical understanding of capital, towards one explicitly including mental capacities as a source of value for AI-driven activities. In an effort to promote an interdisciplinary approach, the paper combines political and economic theories of freedom, justice and capital with recent approaches in applied ethics of technology, and starts applying its normative framework to some concrete examples of AI-based systems: healthcare robotics, ‘citizen science’, social media and platform economy.

From the Conclusion

Whether or not it creates a net job loss (aka technological unemployment), Artificial Intelligence and digital technologies will change the nature of work and will have a deep impact on people’s work lives. New political action is needed to govern this transition. In this paper we have claimed that new philosophical concepts are also needed if the transition is to be governed responsibly and in the interest of everybody. The paper has outlined a general normative framework to make sense of, and address, the issue of human freedom and justice in the age of AI at work. The framework is based on four ideas. First, in general, freedom and justice cannot be achieved only by protecting existing jobs as a goal in itself, inviting persons to find ways to remain relevant in a new machine-driven world, or offering financial compensation to those who are (permanently) left unemployed, for instance via a Universal Basic Income. We should rather prevent technological unemployment and the worsening of working conditions from happening, through a Responsible Innovation approach to technology, in which freedom and justice are built into the technical and institutional structures of the work of the future. Second, and more particularly, we have argued that freedom and justice may be best promoted by a politics and an economics of technology informed by the recognition of different virtues and values as constitutive of different activities, following a Walzerian (‘spheres of justice’) approach to technological and institutional design, possibly supplemented by a virtue ethics component.

Sunday, January 30, 2022

Social proximity and the erosion of norm compliance

Bicchieri, C., Dimant, E., et al.
Games and Economic Behavior
Volume 132, March 2022, Pages 59-72

Abstract

We study how compliance with norms of pro-social behavior is influenced by peers' compliance in a dynamic and non-strategic experimental setting. We show that social proximity among peers is a crucial determinant of the effect. Without social proximity, norm compliance erodes swiftly because participants only conform to observed norm violations while ignoring norm compliance. With social proximity, participants conform to both types of observed behaviors, thus halting the erosion of compliance. Our findings stress the importance of the broader social context for norm compliance and show that, even in the absence of social sanctions, norm compliance can be sustained in repeated interactions, provided there is group identification, as is the case in many natural and online environments.

From the Discussion and conclusion

Social norms are a fundamental component of social and economic life. Therefore, it is important to study conditions under which norm compliance occurs. In this paper, we focused on how observing others' behavior influences individual norm compliance. To investigate this, we designed a non-strategic Take-or-Give (ToG) donation game where people could give to charity, take from it, or abstain from changing the initial allocation between the self and the charity. Using a series of norm-elicitation experiments, we established that most people think taking from the charity is socially inappropriate, whereas abstaining or giving to the charity is appropriate. We then examined the effect of letting individuals observe each other's behavior in a repeated version of the ToG game. Our behavioral results reveal a notable asymmetry in the effect of observing peer behavior: observing other anonymous individuals violating the norm (taking from charity) increased the likelihood that the observers transgress as well. Observing that others donate to charity, however, did not increase donations to the charity. In sum, observing socially inappropriate behavior by anonymous people eroded norm compliance in a way that was not compensated by observing socially appropriate behavior. Our additional experiments show that this partly occurs because observing inappropriate behavior erodes the social norm of giving.

While this asymmetry in reactions paints a bleak picture for norm compliance when other anonymous people can be observed, in most real-world interactions individuals can assess their social proximity to the people they interact with. Assessing similarities with others may bring forth a mechanism of group identification that promotes symmetrical behavioral conformity within the group. The reason for this is that individuals may feel that deviations from group behavior, whether positive or negative, signal a lack of commitment to the group, and they fear that such deviations may trigger disapproval by other group members. Thus, they will be more vigilant, and more responsive, to examples of both socially inappropriate and socially appropriate behavior.

Saturday, January 29, 2022

Are some cultures more mind-minded in their moral judgements than others?

Barrett, H. C., & Saxe, R. R.
(2021). Phil. Trans. R. Soc. B, 376, 20200288.

Abstract

Cross-cultural research on moral reasoning has brought to the fore the question of whether moral judgements always turn on inferences about the mental states of others. Formal legal systems for assigning blame and punishment typically make fine-grained distinctions about mental states, as illustrated by the concept of mens rea, and experimental studies in the USA and elsewhere suggest everyday moral judgements also make use of such distinctions. On the other hand, anthropologists have suggested that some societies have a morality that is disregarding of mental states, and have marshalled ethnographic and experimental evidence in support of this claim. Here, we argue against the claim that some societies are simply less ‘mind-minded’ than others about morality. In place of this cultural main effects hypothesis about the role of mindreading in morality, we propose a contextual variability view in which the role of mental states in moral judgement depends on the context and the reasons for judgement. On this view, which mental states are or are not relevant for a judgement is context-specific, and what appear to be cultural main effects are better explained by culture-by-context interactions.

(cut)

Summing up: Mind-mindedness in context

Our critique of cultural main effects theories, we think, is likely to apply to many domains, not just moral judgement. Dimensions of cultural difference such as the “collectivist/individualist” dimension [50] may capture some small main effects of cultural difference, but we suspect that collectivism/individualism is a parameter that can be flipped contextually within societies to a much greater degree than it varies as a main effect across societies. We may be collectivist within families, for example, but individualist at work. Similarly, we suggest that everywhere there are contexts in which one’s mental states may be deemed morally irrelevant, and others where they are not. Such judgements vary not just across contexts, but across individuals and time. What we argue against, then, is thinking of mindreading as a resource that is scarce in some places and plentiful in others. Instead, we should think of it as a resource that is available everywhere, and whose use in moral judgement depends on a multiplicity of factors, including social norms but also, importantly, the reasons for which people are making judgements. Cognitive resources such as theory of mind might best be seen as ingredients that can be combined in different ways across people, places, and situations. On this view, the space of moral judgements represents a mosaic of variously combined ingredients.

Friday, January 28, 2022

The AI ethicist’s dilemma: fighting Big Tech by supporting Big Tech

Sætra, H.S., Coeckelbergh, M. & Danaher, J. 
AI Ethics (2021). 
https://doi.org/10.1007/s43681-021-00123-7

Abstract

Assume that a researcher uncovers a major problem with how social media are currently used. What sort of challenges arise when they must subsequently decide whether or not to use social media to create awareness about this problem? This situation routinely occurs as ethicists navigate choices regarding how to effect change and potentially remedy the problems they uncover. In this article, challenges related to new technologies and what is often referred to as ‘Big Tech’ are emphasized. We present what we refer to as the AI ethicist’s dilemma, which emerges when an AI ethicist has to consider how their own success in communicating an identified problem is associated with a high risk of decreasing the chances of successfully remedying the problem. We examine how the ethicist can resolve the dilemma and arrive at ethically sound paths of action through combining three ethical theories: virtue ethics, deontological ethics and consequentialist ethics. The article concludes that attempting to change the world of Big Tech only using the technologies and tools they provide will at times prove to be counter-productive, and that political and other more disruptive avenues of action should also be seriously considered by ethicists who want to effect long-term change. Both strategies have advantages and disadvantages, and a combination might be desirable to achieve these advantages and mitigate some of the disadvantages discussed.

From the Discussion

The ethicist’s dilemma arises as soon as the desire to effect change is seemingly most easily satisfied using the very systems that need changing. In this article it is shown that the dilemma involves either strengthening the system by attempting to harness its powers, or potentially not achieving anything by relinquishing the means of using technology to spread one’s message. An environmental ethicist who is sincerely concerned about the effects of climate change could start working as an ethics officer for Big Oil, but there is a chance that doing so may ‘trap’ them in both a logic and an incentive structure that make real change hard to achieve. An AI ethicist contemplating the dangers of new technologies faces a similar problem when, for example, they are offered a lucrative job at a Big Tech company, with a quite uncertain future outside the mainstream as the only alternative.

Turning to the practicalities of change, some rightly argue that political power is dangerous [61]. Furthermore, they might argue that private initiative and innovation are the key to the good life and human welfare. However, the dangers of technology and unbridled innovation are also real, at least according to the ethicists. And if they are serious about these dangers, it may be necessary to emphasise the political domain and its power to disrupt the technological system. The dangers of private power must be bridled by the power of government, and this is in a sense a liberal argument in favour of more active use of government power [1]. Private companies generate a range of problems, and when these are understood as problems resulting from an excessively free market, government intervention for the sake of correcting market failure is normally acceptable to those on both the left and the right of politics.

Thursday, January 27, 2022

Many heads are more utilitarian than one

Keshmirian, A., Deroy, O., & Bahrami, B.
Cognition
Volume 220, March 2022, 104965

Abstract

Moral judgments have a very prominent social nature, and in everyday life they are continually shaped by discussions with others. Psychological investigations of these judgments, however, have rarely addressed the impact of social interactions. To examine the role of social interaction on moral judgments within small groups, we had groups of 4 to 5 participants judge moral dilemmas first individually and privately, then collectively and interactively, and finally individually a second time. We employed both real-life and sacrificial moral dilemmas in which the character's action or inaction violated a moral principle to benefit the greatest number of people. Participants decided whether these utilitarian decisions were morally acceptable or not. In Experiment 1, we found that collective judgments in face-to-face interactions were more utilitarian than the statistical aggregate of their members' judgments, compared to both the first and second individual judgments. This observation supported the hypothesis that deliberation and consensus within a group transiently reduce the emotional burden of norm violation. In Experiment 2, we tested this hypothesis more directly: measuring participants' state anxiety in addition to their moral judgments before, during, and after online interactions, we found again that collective judgments were more utilitarian than individual ones and that state anxiety was reduced during and after social interaction. The utilitarian boost in collective moral judgments is probably due to the reduction of stress in the social setting.

Highlights

• Collective consensual judgments made via group interactions were more utilitarian than individual judgments.

• Group discussion did not change the individual judgments, indicating a normative conformity effect.

• Individuals consented to a group judgment that they did not necessarily buy into personally.

• Collectives were less stressed than individuals after responding to moral dilemmas.

• Interactions reduced aversive emotions (e.g., stress) associated with violation of moral norms.

From the Discussion

Our analysis revealed that groups, in comparison to individuals, are more utilitarian in their moral judgments. Thus, our findings are inconsistent with Virtue-Signaling (VS), which proposed the opposite effect. Crucially, the collective utilitarian boost was short-lived: it was only seen at the collective level and not when participants rated the same questions individually again. Previous research shows that moral change at the individual level, as the result of social deliberation, is rather long-lived and not transient (e.g., see Ueshima et al., 2021). Thus, this collective utilitarian boost could not have resulted from deliberation and reasoning, or from the conscious application of utilitarian principles with authentic reasons to maximize the total good. If that were the case, the effect would have persisted in the second individual judgment as well. That was not what we observed. Consequently, our findings are inconsistent with the Social Deliberation (SD) hypotheses.

Wednesday, January 26, 2022

Threat Rejection Fuels Political Dehumanization

Kubin, E., Kachanoff, F., & Gray, K. 
(2021, December 4).

Abstract

Americans disagree about many things, including what threats are most pressing. We suggest people morally condemn and dehumanize opponents when they are perceived as rejecting the existence or severity of important perceived threats. We explore perceived “threat rejection” across five studies (N=2,404) both in the real-world COVID-19 pandemic and in novel contexts. Americans morally condemned and dehumanized policy opponents when they seemed to reject realistic group threats (e.g., threat to the physical health or resources of the group). Believing opponents rejected symbolic group threats (e.g., to collective identity) was not reliably linked to condemnation and dehumanization. Importantly, the political dehumanization caused by perceived threat rejection can be soothed with a “threat acknowledgement” intervention.

General Discussion 

Does perceived threat rejection sow political divisions? Results suggest that perceiving the “other side” as rejecting realistic (more than symbolic) threat increases moral condemnation and dehumanization, lending support to the asymmetry hypothesis. During COVID-19, those who relatively favored social distancing saw opponents as rejecting realistic threats and morally judged and dehumanized them. In contrast, support for social distancing did not reliably relate to perceiving the other side as rejecting symbolic threat, and symbolic threat was not robustly associated with moral judgment or dehumanization.

Within a novel threat context, people who were more willing to sacrifice their group’s culture to prevent realistic threats to health or resources viewed opponents as rejecting realistic threats and, in turn, morally condemned and dehumanized them. Similarly, people who were more willing to endure realistic threat to protect their culture viewed opponents as rejecting symbolic threats, in turn morally condemning and dehumanizing them, yet these effects were significantly weaker than for realistic threat rejection. Our findings are consistent with research suggesting people condemn behaviors that are perceived as causing concrete (realistic) harm rather than abstract (symbolic) harm (Schein & Gray, 2018).

Using a threat-acknowledgement intervention, we decreased the tendency of people who tended to prioritize protecting the group from realistic threat (i.e., those who tended to support social distancing) to morally judge and dehumanize opponents who prioritized protecting the group from symbolic threat (i.e., those who tended to resist social distancing). Our intervention did not require opponents to compromise their stance; it worked by simply having opponents acknowledge both realistic and symbolic threats when providing a rationale for their position.

--------------------
Note: Helpful research when working with politically intense patients who frequently bring in partisan information to discuss in psychotherapy.

Tuesday, January 25, 2022

Sexbots as Synthetic Companions: Comparing Attitudes of Official Sex Offenders and Non-Offenders.

Zara, G., Veggi, S. & Farrington, D.P. 
Int J of Soc Robotics (2021). 

Abstract

This is the first Italian study to examine views on sexbots of adult male sex offenders and non-offenders, and their perceptions of sexbots as sexual partners and as a means to prevent sexual violence. In order to explore these aspects, 344 adult males were involved in the study. The study carried out two types of comparisons. 100 male sex offenders were compared with 244 male non-offenders. Also, sex offenders were divided into child molesters and rapists. Preliminary findings suggest that sex offenders were less open than non-offenders to sexbots, showed a lower acceptance of them, and were more likely to dismiss the possibility of having an intimate and sexual relationship with a sexbot. Sex offenders were also less likely than non-offenders to believe that the risk of sexual violence against people could be reduced if a sexbot was used in the treatment of sex offenders. No differences were found between child molesters and rapists. Though no definitive conclusion can be drawn about what role sexbots might play in the prevention and treatment of sex offending, this study emphasizes the importance of exploring how sexbots are both perceived and understood. Sex offenders in this study showed a high dynamic sexual risk and, paradoxically, despite, or because of, their sexual deviance (e.g. deficits in sexual self-regulation), they were more inclined to see sexbots as just machines and were reluctant to imagine them as social agents, i.e. as intimate or sexual arousal partners. How sex offenders differ in their dynamic risk and criminal careers can inform experts about the mechanisms that take place and can challenge their engagement in treatment and intervention.

From the Discussion

Being in a Relationship with a Sexbot: a Comparison Between Sex Offenders and Non-Offenders
Notwithstanding that previous studies suggest that those who are quite open in admitting their interest in having a relationship with a sexbot are not necessarily problematic in terms of psycho-sexual functioning and life satisfaction, some anecdotal evidence seems to indicate otherwise. In this study, sex offenders were more reluctant to speak about their preferences towards sexbots. While male non-offenders appeared to be open to sexbots and quite eager to imagine themselves having a relationship with a sexbot or having sexual intercourse with one, sex offenders were reluctant to admit any interest towards sexbots. No clinical data are available to support the assumption about whether interaction with sexbots is in any way egodystonic (inconsistent with one’s ideal self) or egosyntonic (consistent with one’s ideal self). Thus, no one can discount the influence of being in detention upon the offenders’ willingness to feel at ease in expressing their views. It is not unusual that, when in detention, offenders may put up a front. This might explain why the sex offenders in this study kept a low profile on sex matters (e.g. declaring that “sexbots are not for me, I’m not a pervert”, to use their words). Sexuality is a dirty word for sex offenders in detention, and their willingness to be seen as reformed and “sexually normal” is what perhaps motivated them to deny that they had any form of curiosity or attraction towards any sexbot presented to them.

Monday, January 24, 2022

Children Prioritize Humans Over Animals Less Than Adults Do

Wilks M, Caviola L, Kahane G, Bloom P.
Psychological Science. 2021;32(1):27-38. 
doi:10.1177/0956797620960398

Abstract

Is the tendency to morally prioritize humans over animals weaker in children than adults? In two preregistered studies (total N = 622), 5- to 9-year-old children and adults were presented with moral dilemmas pitting varying numbers of humans against varying numbers of either dogs or pigs and were asked who should be saved. In both studies, children had a weaker tendency than adults to prioritize humans over animals. They often chose to save multiple dogs over one human, and many valued the life of a dog as much as the life of a human. Although they valued pigs less, the majority still prioritized 10 pigs over one human. By contrast, almost all adults chose to save one human over even 100 dogs or pigs. Our findings suggest that the common view that humans are far more morally important than animals appears late in development and is likely socially acquired.

From the Discussion section

What are the origins of this tendency? One possibility is that it is an unlearned preference. For much of human history, animals played a central role in human life—whether as a threat or as a resource. It therefore seems possible that humans would develop distinctive psychological mechanisms for thinking about animals. Even if there are no specific cognitive adaptations for thinking about animals, it is hardly surprising that humans prefer humans over animals—similar to their preference for tribe members over strangers. Similarly, given that in-group favoritism in human groups (e.g., racism, sexism, minimal groups) tends to emerge as early as preschool years (Buttelmann & Böhm, 2014), one would expect that a basic tendency to prioritize humans over animals also emerges early.

But we would suggest that the much stronger tendency to prioritize humans over animals in adults has a different source that, given the lack of correlation between age and speciesism in children, emerges late in development. Adolescents may learn and internalize the socially held speciesist notion—or ideology—that humans are morally special and deserve full moral status, whereas animals do not.