Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, August 2, 2021

Landmark research integrity survey finds questionable practices are surprisingly common

Jop De Vrieze
Science Magazine
Originally posted 7 Jul 21

More than half of Dutch scientists regularly engage in questionable research practices, such as hiding flaws in their research design or selectively citing literature, according to a new study. And one in 12 admitted to committing a more serious form of research misconduct within the past 3 years: the fabrication or falsification of research results.

This rate of 8% for outright fraud was more than double that reported in previous studies. Organizers of the Dutch National Survey on Research Integrity, the largest of its kind to date, took special precautions to guarantee the anonymity of respondents for these sensitive questions, says Gowri Gopalakrishna, the survey’s leader and an epidemiologist at Amsterdam University Medical Center (AUMC). “That method increases the honesty of the answers,” she says. “So we have good reason to believe that our outcome is closer to reality than that of previous studies.” The survey team published results on 6 July in two preprint articles, which also examine factors that contribute to research misconduct, on MetaArxiv.

When the survey began last year, organizers invited more than 60,000 researchers to take part—those working across all fields of research, both science and the humanities, at some 22 Dutch universities and research centers. However, many institutions refused to cooperate for fear of negative publicity, and responses fell short of expectations: Only about 6800 completed surveys were received. Still, that’s more responses than any previous research integrity survey, and the response rate at the participating universities was 21%—in line with previous surveys.

One of the preprints focuses on the prevalence of misbehavior—cases of fraud as well as a less severe category of “questionable research practices,” such as carelessly assessing the work of colleagues, poorly mentoring junior researchers, or selectively citing scientific literature. The other article focuses on responsible behavior; this includes correcting one’s own published errors, sharing research data, and “preregistering” experiments—posting hypotheses and protocols ahead of time to reduce the bias that can arise when these are released after data collection.

Sunday, August 1, 2021

Understanding, explaining, and utilizing medical artificial intelligence

Cadario, R., Longoni, C. & Morewedge, C.K. 
Nat Hum Behav (2021). 


Medical artificial intelligence is cost-effective and scalable and often outperforms human providers, yet people are reluctant to use it. We show that resistance to the utilization of medical artificial intelligence is driven by both the subjective difficulty of understanding algorithms (the perception that they are a ‘black box’) and by an illusory subjective understanding of human medical decision-making. In five pre-registered experiments (1–3B: N = 2,699), we find that people exhibit an illusory understanding of human medical decision-making (study 1). This leads people to believe they better understand decisions made by human than algorithmic healthcare providers (studies 2A,B), which makes them more reluctant to utilize algorithmic than human providers (studies 3A,B). Fortunately, brief interventions that increase subjective understanding of algorithmic decision processes increase willingness to utilize algorithmic healthcare providers (studies 3A,B). A sixth study on Google Ads for an algorithmic skin cancer detection app finds that the effectiveness of such interventions generalizes to field settings (study 4: N = 14,013).

From the Discussion

Utilization of algorithmic-based healthcare services is becoming critical with the rise of telehealth services, the current surge in healthcare demand and long-term goals of providing affordable and high-quality healthcare in developed and developing nations. Our results yield practical insights for reducing reluctance to utilize medical AI. Because the technologies used in algorithmic-based medical applications are complex, providers tend to present AI provider decisions as a ‘black box’. Our results underscore the importance of recent policy recommendations to open this black box to patients and users. A simple one-page visual or sentence that explains the criteria or process used to make medical decisions increased acceptance of an algorithm-based skin cancer diagnostic tool, which could be easily adapted to other domains and procedures.

Given the complexity of the process by which medical AI makes decisions, firms now tend to emphasize the outcomes that algorithms produce in their marketing to consumers, which feature benefits such as accuracy, convenience and rapidity (performance), while providing few details about how algorithms work (process). Indeed, in an ancillary study examining the marketing of skin cancer smartphone applications (Supplementary Appendix 8), we find that performance-related keywords were used to describe 57–64% of the applications, whereas process-related keywords were used to describe 21% of the applications. Improving subjective understanding of how medical AI works may then provide beneficent insights not only for increasing consumer adoption but also for firms seeking to improve their positioning. Indeed, we find increased advertising efficacy for SkinVision, a skin cancer detection app, when advertising included language explaining how it works.

Saturday, July 31, 2021

Stewardship of global collective behavior

Bak-Coleman, J. B., et al.
Proceedings of the National Academy of Sciences 
Jul 2021, 118 (27) e2025764118
DOI: 10.1073/pnas.2025764118


Collective behavior provides a framework for understanding how the actions and properties of groups emerge from the way individuals generate and share information. In humans, information flows were initially shaped by natural selection yet are increasingly structured by emerging communication technologies. Our larger, more complex social networks now transfer high-fidelity information over vast distances at low cost. The digital age and the rise of social media have accelerated changes to our social systems, with poorly understood functional consequences. This gap in our knowledge represents a principal challenge to scientific progress, democracy, and actions to address global crises. We argue that the study of collective behavior must rise to a “crisis discipline” just as medicine, conservation, and climate science have, with a focus on providing actionable insight to policymakers and regulators for the stewardship of social systems.


Human collective dynamics are critical to the wellbeing of people and ecosystems in the present and will set the stage for how we face global challenges with impacts that will last centuries. There is no reason to suppose natural selection will have endowed us with dynamics that are intrinsically conducive to human wellbeing or sustainability. The same is true of communication technology, which has largely been developed to solve the needs of individuals or single organizations. Such technology, combined with human population growth, has created a global social network that is larger, denser, and able to transmit higher-fidelity information at greater speed. With the rise of the digital age, this social network is increasingly coupled to algorithms that create unprecedented feedback effects.

Insight from across academic disciplines demonstrates that past and present changes to our social networks will have functional consequences across scales of organization. Given that the impacts of communication technology will transcend disciplinary lines, the scientific response must do so as well. Unsafe adoption of technology has the potential to both threaten wellbeing in the present and have lasting consequences for sustainability. Mitigating risk to ourselves and posterity requires a consolidated, crisis-focused study of human collective behavior.

Such an approach can benefit from lessons learned in other fields, including climate change and conservation biology, which are likewise required to provide actionable insight without the benefit of a complete understanding of the underlying dynamics. Integrating theoretical, descriptive, and empirical approaches will be necessary to bridge the gap between individual and large-scale behavior. There is reason to be hopeful that well-designed systems can promote healthy collective action at scale, as has been demonstrated in numerous contexts including the development of open-sourced software, curating Wikipedia, and the production of crowd-sourced maps. These examples not only provide proof that online collaboration can be productive, but also highlight means of measuring and defining success. Research in political communications has shown that while online movements and coordination are often prone to failure, when they succeed, the results can be dramatic. Quantifying benefits of online interaction, and limitations to harnessing these benefits, is a necessary step toward revealing the conditions that promote or undermine the value of communication technology.

Friday, July 30, 2021

The Impact of Ignorance Beyond Causation: An Experimental Meta-Analysis

L. Kirfel & J. P. Phillips


Norm violations have been demonstrated to impact a wide range of seemingly non-normative judgments. Among other things, when agents’ actions violate prescriptive norms they tend to be seen as having done those actions more freely, as having acted more intentionally, as being more of a cause of subsequent outcomes, and even as being less happy. The explanation of this effect continues to be debated, with some researchers appealing to features of actions that violate norms, and other researchers emphasizing the importance of agents’ mental states when acting. Here, we report the results of a large-scale experiment that replicates and extends twelve of the studies that originally demonstrated the pervasive impact of norm violations. In each case, we build on the pre-existing experimental paradigms to additionally manipulate whether the agents knew that they were violating a norm while holding fixed the action done. We find evidence for a pervasive impact of ignorance: the impact of norm violations on non-normative judgments depends largely on the agent knowing that they were violating a norm when acting.

From the Discussion

Norm violations have been previously demonstrated to influence a wide range of intuitive judgments, including judgments of causation, freedom, happiness, doing vs. allowing, mental state ascriptions, and modal claims. A continuing debate centers on why normality has such a pervasive impact, and whether one should attempt to offer a unified explanation of these various effects (Hindriks, 2014).

At the broadest level, the current results demonstrate that the pervasive impact of normality likely warrants a unified explanation at some level. Across a wide range of intuitive judgments and highly different manipulations of an agent’s knowledge, we found that the impact of normality on non-normative judgments was diminished when the agent did not know that they were violating a norm. That is, we found evidence for a correspondingly pervasive impact of ignorance.

Thursday, July 29, 2021

Technology in the Age of Innovation: Responsible Innovation as a New Subdomain Within the Philosophy of Technology

von Schomberg, L., Blok, V. 
Philos. Technol. 34, 309–323 (2021). 


Praised as a panacea for resolving all societal issues, and self-evidently presupposed as technological innovation, the concept of innovation has become the emblem of our age. This is especially reflected in the context of the European Union, where it is considered to play a central role in both strengthening the economy and confronting the current environmental crisis. The pressing question is how technological innovation can be steered into the right direction. To this end, recent frameworks of Responsible Innovation (RI) focus on how to enable outcomes of innovation processes to become societally desirable and ethically acceptable. However, questions with regard to the technological nature of these innovation processes are rarely raised. For this reason, this paper raises the following research question: To what extent is RI possible in the current age, where the concept of innovation is predominantly presupposed as technological innovation? On the one hand, we depart from a post-phenomenological perspective to evaluate the possibility of RI in relation to the particular technological innovations discussed in the RI literature. On the other hand, we emphasize the central role innovation plays in the current age, and suggest that the presupposed concept of innovation projects a techno-economic paradigm. In doing so, we ultimately argue that in the attempt to steer innovation, frameworks of RI are in fact steered by the techno-economic paradigm inherent in the presupposed concept of innovation. Finally, we account for what implications this has for the societal purpose of RI.

The Conclusion

Hence, even though RI provides a critical analysis of innovation at the ontic level (i.e., concerning the introduction and usage of particular innovations), it still lacks a critical analysis at the ontological level (i.e., concerning the techno-economic paradigm of innovation). Therefore, RI is in need of a fundamental reflection that not only exposes the techno-economic paradigm of innovation—which we did in this paper—but that also explores an alternative concept of innovation which addresses the public good beyond the current privatization wave. The political origins of innovation that we encountered in Section 2, along with the political ends that the RI literature explicitly prioritizes, suggest that we should inquire into a political orientation of innovation. A crucial task of this inquiry would be to account for what such a political orientation of innovation precisely entails at the ontic level, and how it relates to the current techno-economic paradigm of innovation at the ontological level.

Wednesday, July 28, 2021

Hostile and benevolent sexism: The differential roles of human supremacy beliefs, women’s connection to nature, and the dehumanization of women

Salmen, A., & Dhont, K. (2020). 
Group Processes & Intergroup Relations. 


Scholars have long argued that sexism is partly rooted in dominance motives over animals and nature, with women being perceived as more animal-like and more closely connected to nature than men. Yet systematic research investigating these associations is currently lacking. Five studies (N = 2,409) consistently show that stronger beliefs in human supremacy over animals and nature were related to heightened hostile and benevolent sexism. Furthermore, perceiving women as more closely connected to nature than men was particularly associated with higher benevolent sexism, whereas subtle dehumanization of women was uniquely associated with higher hostile sexism. Blatant dehumanization predicted both types of sexism. Studies 3 and 4 highlight the roles of social dominance orientation and benevolent beliefs about nature underpinning these associations, while Study 5 demonstrates the implications for individuals’ acceptance of rape myths and policies restricting pregnant women’s freedom. Taken together, our findings reveal the psychological connections between gender relations and human–animal relations.

Implications and Conclusions

Scholars have argued that in order to effectively combat oppression, different forms of prejudice cannot be seen in isolation, but their interdependency needs to be understood (Adams, 1990/2015; 1994/2018; Adams & Gruen, 2014; C. A. MacKinnon, 2004). Based on the present findings, it can be argued that the objectification of women in campaigns to promote animal rights not only expresses sexist messages, but may be ineffective in addressing animal suffering (see also Bongiorno et al., 2013). Indeed, it may reinforce superiority beliefs in both human intergroup and human–animal relations. Along similar lines, our findings raise important questions regarding the frequent use of media images depicting women in an animalistic way or together with images of nature (e.g., Adams, 1990/2015; Plous & Neptune, 1997; Reynolds & Haslam, 2011). Through strengthening the association of women with animals and nature, exposure to these images might increase and maintain benevolent and hostile sexism.

Taken together, by showing that the way people think about animals is associated with exploitative views about women, our findings move beyond traditional psychological theorizing on gender-based bias and provide empirical support for the ideas of feminist scholars that, on a psychological level, systems of oppression and exploitation of women and animals are closely connected.

Tuesday, July 27, 2021

Forms and Functions of the Social Emotions

Sznycer, D., Sell, A., & Lieberman, D. (2021). 
Current Directions in Psychological Science. 


In engineering, form follows function. It is therefore difficult to understand an engineered object if one does not examine it in light of its function. Just as understanding the structure of a lock requires understanding the desire to secure valuables, understanding structures engineered by natural selection, including emotion systems, requires hypotheses about adaptive function. Social emotions reliably solved adaptive problems of human sociality. A central function of these emotions appears to be the recalibration of social evaluations in the minds of self and others. For example, the anger system functions to incentivize another individual to value your welfare more highly when you deem the current valuation insufficient; gratitude functions to consolidate a cooperative relationship with another individual when there are indications that the other values your welfare; shame functions to minimize the spread of discrediting information about yourself and the threat of being devalued by others; and pride functions to capitalize on opportunities to become more highly valued by others. Using the lens of social valuation, researchers are now mapping these and other social emotions at a rapid pace, finding striking regularities across industrial and small-scale societies and throughout history.

From the Shame portion

The behavioral repertoire of shame is broad. From the perspective of the disgraced or to-be-disgraced individual, a trait (e.g., incompetence) or course of action (e.g., theft) that fellow group members view negatively can be shielded from others’ censure at each of various junctures: imagination, decision making, action, information diffusion within the community, and audience reaction. Shame appears to have authority over devaluation-minimizing responses relevant to each of these junctures. For example, shame can lead people to turn away from courses of actions that might lead others to devalue them, to interrupt their execution of discrediting actions, to conceal and destroy reputationally damaging information about themselves, and to hide. When an audience finds discrediting information about the focal individual and condemns or attacks that individual, the shamed individual may apologize, signal submission, appease, cooperate, obfuscate, lie, shift the blame to others, or react with aggression. These behaviors are heterogeneous from a tactical standpoint; some even work at cross-purposes if mobilized concurrently. But each of these behaviors appears to have the strategic potential to limit the threat of devaluation in certain contexts, combinations, or sequences.

Such shame-inspired behaviors as hiding, scapegoating, and aggressing are undesirable from the standpoint of victims and third parties. This has led to the view that shame is an ugly and maladaptive emotion (Tangney et al., 1996). However, note that those behaviors can enhance the welfare of the focal individual, who is pressed to escape detection and minimize or counteract devaluation by others. Whereas the consequences of social devaluation are certainly ugly for the individual being devalued, the form-function approach suggests instead that shame is an elegantly engineered system that transmits bad news of the potential for devaluation to the array of counter-devaluation responses available to the focal individual.

Important data points to share with trainees.  A good refresher for seasoned therapists.

Monday, July 26, 2021

Do doctors engaging in advocacy speak for themselves or their profession?

Elizabeth Lanphier
Journal of Medical Ethics Blog
Originally posted 17 June 21

Here is an excerpt:

My concern is not the claim that expertise should be shared. (It should!) Nor do I think there is any neat distinction between physician responsibilities for individual health and public health. But I worry that when Strous and Karni alternately frame physician duties to “speak out” as individual duties and collective ones, they collapse necessary distinctions between the risks, benefits, and demands of these two types of obligations.

Many of us have various role-based individual responsibilities. We can have obligations as a parent, as a citizen, or as a professional. Having an individual responsibility as a physician involves duties to your patients, but also general duties to care in the event you are in a situation in which your expertise is needed (the “is there a doctor on this flight?” scenario).

Collective responsibility, on the other hand, is when a group has a responsibility as a group. The philosophical literature debates hard-to-resolve questions about what it means to be a “group,” and how groups come to have or discharge responsibilities. Collective responsibility raises complicated questions like: If physicians have a collective responsibility to speak out during the COVID-19 pandemic, does every physician have such an obligation? Does any individual physician?

Because individual obligations attribute duties to specific persons responsible for carrying them out in ways collective duties tend not to, I understand why individual physician obligations are attractive. But this comes with risks. One risk is that a physician speaks out as an individual, appealing to the authority of their medical credentials, but not in alignment with their profession.

In my essay I describe a family physician inviting his extended family for a holiday meal during a peak period of SARS-CoV-2 transmission because he didn’t think COVID-19 was a “big deal.”

More infamously, Dr. Scott Atlas served as Donald J. Trump’s coronavirus advisor, and although he is a physician, he did not have experience in public health, infectious disease, or critical care medicine applicable to COVID-19. Atlas was a physician speaking as a physician, but he routinely promoted views starkly different from those of physicians with expertise relevant to the pandemic, and the guidance coming from scientific and medical communities.

Sunday, July 25, 2021

Should we be concerned that the decisions of AIs are inscrutable?

John Zerilli
Originally published 14 June 21

Here is an excerpt:

However, there’s a danger of carrying reliabilist thinking too far. Compare a simple digital calculator with an instrument designed to assess the risk that someone convicted of a crime will fall back into criminal behaviour (‘recidivism risk’ tools are being used all over the United States right now to help officials determine bail, sentencing and parole outcomes). The calculator’s outputs are so dependable that an explanation of them seems superfluous – even for the first-time homebuyer whose mortgage repayments are determined by it. One might take issue with other aspects of the process – the fairness of the loan terms, the intrusiveness of the credit rating agency – but you wouldn’t ordinarily question the engineering of the calculator itself.

That’s utterly unlike the recidivism risk tool. When it labels a prisoner as ‘high risk’, neither the prisoner nor the parole board can be truly satisfied until they have some grasp of the factors that led to it, and the relative weights of each factor. Why? Because the assessment is such that any answer will necessarily be imprecise. It involves the calculation of probabilities on the basis of limited and potentially poor-quality information whose very selection is value-laden.

But what if systems such as the recidivism tool were in fact more like the calculator? For argument’s sake, imagine a recidivism risk-assessment tool that was basically infallible, a kind of Casio-cum-Oracle-of-Delphi. Would we still expect it to ‘show its working’?

This requires us to think more deeply about what it means for an automated decision system to be ‘reliable’. It’s natural to think that such a system would make the ‘right’ recommendations, most of the time. But what if there were no such thing as a right recommendation? What if all we could hope for were only a right way of arriving at a recommendation – a right way of approaching a given set of circumstances? This is a familiar situation in law, politics and ethics. Here, competing values and ethical frameworks often produce very different conclusions about the proper course of action. There are rarely unambiguously correct outcomes; instead, there are only right ways of justifying them. This makes talk of ‘reliability’ suspect. For many of the most morally consequential and controversial applications of ML, to know that an automated system works properly just is to know and be satisfied with its reasons for deciding.

Saturday, July 24, 2021

Freezing Eggs and Creating Patients: Moral Risks of Commercialized Fertility

E. Reis & S. Reis-Dennis
The Hastings Center Report
Originally published 24 Nov 17


There's no doubt that reproductive technologies can transform lives for the better. Infertile couples and single, lesbian, gay, intersex, and transgender people have the potential to form families in ways that would have been inconceivable years ago. Yet we are concerned about the widespread commercialization of certain egg-freezing programs, the messages they propagate about motherhood, the way they blur the line between care and experimentation, and the manipulative and exaggerated marketing that stretches the truth and inspires false hope in women of various ages. We argue that although reproductive technology, and egg freezing in particular, promise to improve women's care by offering more choices to achieve pregnancy and childbearing, they actually have the potential to be disempowering. First, commercial motives in the fertility industry distort women's medical deliberations, thereby restricting their autonomy; second, having the option to freeze their eggs can change the meaning of women's reproductive choices in a way that is limiting rather than liberating.

Here is an excerpt:

Egg banks are offering presumably fertile women a solution for potential infertility that they may never face. These women might pay annual egg-freezing storage rates but never use their eggs. In fact, even if a woman who froze eggs in her early twenties waited until her late thirties to use them, there can be no guarantee that those eggs would produce a viable pregnancy. James A. Grifo, program director of NYU Langone Health Fertility Center, has speculated, “[T]here have been reports of embryos that have been frozen for over 15 years making babies, and we think the same thing is going to be true of eggs.” But the truth is that the technology is so new that neither he nor we know how frozen eggs will hold up over a long period of time.

Some women in their twenties might want to hedge their bets against future infertility by freezing their eggs as a part of an egg-sharing program; others might hope to learn from a simple home test of hormone levels whether their egg supply (ovarian reserve) is low—a relatively rare condition. However, these tests are not foolproof. The ASRM has cautioned against home tests of ovarian reserve for women in their twenties because it may lead to “false reassurance or unnecessary anxiety and concern.” This kind of medicalization of fertility may not be liberating; instead, it will exert undue pressure on women and encourage them to rely on egg freezing over other reproductive options when it is far from guaranteed that those frozen eggs (particularly if the women have the condition known as premature ovarian aging) will ultimately lead to successful pregnancies and births.

Friday, July 23, 2021

Women Carry An Undue Mental Health Burden. They Shouldn’t Have To

Rawan Hamadeh
Ms. Magazine
Originally posted 12 June 21

Here is an excerpt:

In developing countries, there is a huge gap in the availability and accessibility of specialized mental health services. Rather than visiting mental health specialists, women are more likely to seek mental health support in primary health care settings while accompanying their children or while attending consultations for other health issues. This leads to many mental health conditions going unidentified and therefore not treated. Often, women do not feel fully comfortable disclosing certain psychological and emotional distress because they fear stigmatization, confidentiality breaches or not being taken seriously.

COVID-19 has put the mental well-being of the entire world at risk. More adults are reporting struggles with mental health and substance use and are experiencing more symptoms of anxiety and depressive disorders. The stressors caused by the pandemic have affected the entire population; however, the effect on women and mothers specifically has been greater.

Women, the unsung heroes of the pandemic, face mounting pressures amid this global health crisis. Reports suggest that the long-term repercussions of COVID-19 could undo decades of progress for women and impose considerable additional burdens on them, threatening the difficult journey toward gender equality.

Unemployment, parenting responsibilities, homeschooling or caring for sick relatives are all additional burdens on women’s daily lives during the pandemic. It’s also important that we acknowledge the exponential need for mental health support for health care workers, and particularly health care mothers, who are juggling both their professional duties and their parenting responsibilities. They are the heroes on the front lines of the fight against the virus, and it’s crucial to prioritize their physical as well as their mental health.

Thursday, July 22, 2021

The Possibility of an Ongoing Moral Catastrophe

Williams, E. G. (2015).
Ethical Theory and Moral Practice, 18,
971–982.


This article gives two arguments for believing that our society is unknowingly guilty of serious, large-scale wrongdoing. First is an inductive argument: most other societies, in history and in the world today, have been unknowingly guilty of serious wrongdoing, so ours probably is too. Second is a disjunctive argument: there are a large number of distinct ways in which our practices could turn out to be horribly wrong, so even if no particular hypothesized moral mistake strikes us as very likely, the disjunction of all such mistakes should receive significant credence. The article then discusses what our society should do in light of the likelihood that we are doing something seriously wrong: we should regard intellectual progress, of the sort that will allow us to find and correct our moral mistakes as soon as possible, as an urgent moral priority rather than as a mere luxury; and we should also consider it important to save resources and cultivate flexibility, so that when the time comes to change our policies we will be able to do so quickly and smoothly.
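The disjunctive argument rests on a simple probabilistic point: even if each hypothesized moral mistake seems individually unlikely, the chance that at least one of them is real can still be substantial. A minimal sketch, using invented probabilities purely for illustration (and assuming, for simplicity, that the candidate mistakes are independent):

```python
# Disjunctive argument, numerically: the probability that at least one
# of several candidate moral mistakes is real, assuming independence.
# P(at least one) = 1 - product over i of (1 - p_i)

def prob_at_least_one(probs):
    """Probability that at least one event occurs, given independent
    per-event probabilities."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Ten distinct candidate mistakes, each judged only 5% likely
# (illustrative numbers, not from the article):
candidate_probs = [0.05] * 10
print(round(prob_at_least_one(candidate_probs), 3))  # → 0.401
```

Even with every individual hypothesis at 5%, the disjunction receives roughly 40% credence, which is the shape of the argument Williams makes informally.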

Wednesday, July 21, 2021

The Parliamentary Approach to Moral Uncertainty

Toby Newberry & Toby Ord
Future of Humanity Institute
University of Oxford 2021


We introduce a novel approach to the problem of decision-making under moral uncertainty, based on an analogy to a parliament. The appropriate choice under moral uncertainty is the one that would be reached by a parliament comprised of delegates representing the interests of each moral theory, who number in proportion to your credence in that theory. We present what we see as the best specific approach of this kind (based on proportional chances voting), and also show how the parliamentary approach can be used as a general framework for thinking about moral uncertainty, where extant approaches to addressing moral uncertainty correspond to parliaments with different rules and procedures.

Here is an excerpt:

Moral Parliament

Imagine that each moral theory in which you have credence got to send delegates to an internal parliament, where the number of delegates representing each theory was proportional to your credence in that theory. Now imagine that these delegates negotiate with each other, advocating on behalf of their respective moral theories, until eventually the parliament reaches a decision by the delegates voting on the available options. This would provide a novel approach to decision-making under moral uncertainty that may avoid some of the problems that beset the others, and it may even provide a new framework for thinking about moral uncertainty more broadly.


Here, we endorse a common-sense approach to the question of scale which has much in common with standard decision-theoretic conventions. The suggestion is that one should convene Moral Parliament for those decision-situations to which it is intuitively appropriate, such as those involving non-trivial moral stakes, where the possible options are relatively well-defined, and so on. Normatively speaking, if Moral Parliament is the right approach to take to moral uncertainty, then it may also be right to apply it to all decision-situations (however this is defined). But practically speaking, this would be very difficult to achieve. This move has essentially the same implications as the approach of sidestepping the question but comes with a positive endorsement of Moral Parliament’s application to ‘the kinds of decision-situations typically described in papers on moral uncertainty’. This is the sense in which the common-sense approach resembles standard decision-theoretic conventions. 
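The “proportional chances voting” the authors favor is easy to sketch in code. This is an illustrative toy, not the authors’ formal specification: the theory names, credences, and the single-round, first-preference ballot are all assumptions made for the example.

```python
import random

def proportional_chances_vote(credences, preferences, rng=None):
    """Choose an option with probability proportional to the credence-weighted
    votes it receives, where each moral theory's delegates all vote for that
    theory's favored option.

    credences:   dict mapping theory -> your credence in it (sums to ~1)
    preferences: dict mapping theory -> the option that theory favors
    """
    rng = rng or random.Random()
    votes = {}
    for theory, credence in credences.items():
        option = preferences[theory]
        votes[option] = votes.get(option, 0.0) + credence
    options = list(votes)
    # Unlike majority rule, every option with nonzero support keeps a
    # proportional chance of being enacted.
    return rng.choices(options, weights=[votes[o] for o in options], k=1)[0]

credences = {"utilitarianism": 0.6, "deontology": 0.3, "virtue ethics": 0.1}
preferences = {"utilitarianism": "intervene",
               "deontology": "abstain",
               "virtue ethics": "abstain"}
choice = proportional_chances_vote(credences, preferences)
```

Over many decisions, “intervene” wins roughly 60% of the time, which is what distinguishes this rule from a “my favorite theory” approach that would pick it every time.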

Tuesday, July 20, 2021

Morally Motivated Networked Harassment as Normative Reinforcement

Marwick, A. E. (2021). 
Social Media + Society. 


While online harassment is recognized as a significant problem, most scholarship focuses on descriptions of harassment and its effects. We lack explanations of why people engage in online harassment beyond simple bias or dislike. This article puts forth an explanatory model where networked harassment on social media functions as a mechanism to enforce social order. Drawing from examples of networked harassment taken from qualitative interviews with people who have experienced harassment (n = 28) and Trust & Safety workers at social platforms (n = 9), the article builds on Brady, Crockett, and Van Bavel’s model of moral contagion to explore how moral outrage is used to justify networked harassment on social media. In morally motivated networked harassment, a member of a social network or online community accuses a target of violating their network’s norms, triggering moral outrage. Network members send harassing messages to the target, reinforcing their adherence to the norm and signaling network membership. Frequently, harassment results in the accused self-censoring and thus regulates speech on social media. Neither platforms nor legal regulations protect against this form of harassment. This model explains why people participate in networked harassment and suggests possible interventions to decrease its prevalence.

From the Conclusion

Ultimately, conceptualizing harassment as morally motivated and understanding it as a technique of norm reinforcement explains why people participate in it, a necessary step to decreasing it. This model may open creative solutions to harassment and content moderation. MMNH also recognizes that harassment, while more endemic to minoritized communities, may be experienced by people from a wide variety of identities and political commitments, suggesting many possibilities for future research. Current technical and legal models of harassment do not protect against networked harassment; by providing a new model, I hope to contribute to lessening its prevalence.

Monday, July 19, 2021

Non-consensual personified sexbots: an intrinsic wrong

Lancaster, K. 
Ethics Inf Technol (2021). 


Humanoid robots used for sexual purposes (sexbots) are beginning to look increasingly lifelike. It is possible for a user to have a bespoke sexbot created which matches their exact requirements in skin pigmentation, hair and eye colour, body shape, and genital design. This means that it is possible—and increasingly easy—for a sexbot to be created which bears a very high degree of resemblance to a particular person. There is a small but steadily increasing literature exploring some of the ethical issues surrounding sexbots; however, sexbots made to look like particular people are something which, as yet, has not been philosophically addressed in the literature. In this essay I argue that creating a lifelike sexbot to represent and resemble someone is an act of sexual objectification which morally requires consent, and that doing so without the person’s consent is intrinsically wrong. I consider two sexbot creators: Roy and Fred. Roy creates a sexbot of Katie with her consent, and Fred creates a sexbot of Jane without her consent. I draw on the work of Alan Goldman, Rae Langton, and Martha Nussbaum in particular to demonstrate that creating a sexbot of a particular person requires consent if it is to be intrinsically permissible.

From the Conclusion

Although sexbots may bring about a multitude of negative consequences for individuals and society, I have set these aside in order to focus on the intrinsically wrong act of creating a personified sexbot without the consent of the human subject. I have maintained that creating a personified sexbot is an act of sexual objectification directed towards that particular person which may or may not be permissible, depending on whether the human subject’s consent was obtained. Using Nussbaum’s Kantian-inspired argument, I have shown that non-consensually sexbotifying a human subject involves using them merely as a means, which is intrinsically wrong. Meanwhile, in a sexbotification case where the human subject’s prior consent is obtained, she has not been intrinsically wronged by the creation of the sexbot because she has not been used merely as a means to an end. With personified sexbots, consent of the human subject is a moral prerequisite, and is transformative when obtained. In other words, in cases of non-consensual sexbotification, the lack of consent is the wrong-making feature of the act. Even if it were the case that creating any sexbot is intrinsically wrong because it objectifies women qua women, it is still right to maintain that sexbotifying a woman without her consent is an additional intrinsic wrong.

Sunday, July 18, 2021

‘They’re Not True Humans’: Beliefs About Moral Character Drive Categorical Denials of Humanity

Phillips, B. (2021, May 29). 


In examining the cognitive processes that drive dehumanization, laboratory-based research has focused on non-categorical denials of humanity. Here, we examine the conditions under which people are willing to categorically deny that someone else is human. In doing so, we argue that people harbor a dual character concept of humanity. Research has found that dual character concepts have two independent sets of criteria for their application, one of which is normative. Across four experiments, we found evidence that people deploy one criterion according to which being human is a matter of being a Homo sapiens; as well as a normative criterion according to which being human is a matter of possessing a deep-seated commitment to do the morally right thing. Importantly, we found that people are willing to affirm that someone is human in the species sense, but deny that they are human in the normative sense, and vice versa. These findings suggest that categorical denials of humanity are not confined to extreme cases outside the laboratory. They also suggest a solution to “the paradox of dehumanization.”


6.2 The paradox of dehumanization

The findings reported here also suggest a solution to the paradox of dehumanization. Recall that in paradigmatic cases of dehumanization, such as the Holocaust, the perpetrators tend to attribute certain uniquely human traits to their victims. For example, the Nazis frequently characterized Jewish people as criminals and traitors. They also treated them as moral agents, and subjected them to severe forms of punishment and humiliation (see Gutman and Berenbaum, 1998). Criminality, treachery, and moral agency are not capacities that we tend to attribute to nonhuman animals. Thus, can we really say that the Nazis thought of their victims as nonhuman? In responding to this paradox, some theorists have suggested that the perpetrators in these paradigmatic cases do not, in fact, think of their victims as nonhuman (see Appiah, 2008; Bloom, 2017; Manne, 2016, 2018, chapter 5; Over, 2020; Rai et al., 2017). Other theorists have suggested that the perpetrators harbor inconsistent representations of their victims, simultaneously thinking of them as both human and subhuman (Smith, 2016, 2020). Our findings suggest a third possibility: namely, that the perpetrators harbor a dual character concept of humanity, categorizing their victims as human in one sense, but denying that they are human in another sense. For example, it is true that the Nazis attributed certain uniquely human traits to their victims, such as criminality. However, when categorizing their victims as evil criminals, the Nazis may have been thinking of them as nonhuman in the normative sense, while recognizing them as human in the species sense (for a relevant discussion, see Steizinger, 2018). This squares with the fact that when the Nazis likened Jewish people to certain animals, such as rats, this often took on a moralizing tone. For example, in an antisemitic book entitled The Eternal Jew (Nachfolger, 1937), Jewish neighborhoods in Berlin were described as “breeding grounds of criminal and political vermin.” Similarly, when the Nazis referred to Jews as “subhumans,” they often characterized them as bad moral agents. For example, as was mentioned above, Goebbels described Bolshevism as “the declaration of war by Jewish-led international subhumans against culture itself.” Similarly, in one 1943 Nazi pamphlet, Marxist values are described as appealing to subhumans, while liberalist values are described as “allowing the triumph of subhumans” (Anonymous, 1943, chapter 1).

Saturday, July 17, 2021

Bad machines corrupt good morals

Köbis, N., Bonnefon, J.-F. & Rahwan, I. 
Nat Hum Behav 5, 679–685 (2021). 


As machines powered by artificial intelligence (AI) influence humans’ behaviour in ways that are both like and unlike the ways humans influence each other, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioural science, human–computer interaction and AI research. We propose four main social roles through which both humans and machines can influence ethical behaviour. These are: role model, advisor, partner and delegate. When AI agents become influencers (role models or advisors), their corrupting power may not exceed the corrupting power of humans (yet). However, AI agents acting as enablers of unethical behaviour (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction. On the basis of these insights, we outline a research agenda to gain behavioural insights for better AI oversight.

From the end of the article

Another policy-relevant research question is how to integrate awareness of the corrupting force of AI tools into the innovation process. New AI tools hit the market on a daily basis. The current approach of ‘innovate first, ask for forgiveness later’ has caused considerable backlash and even demands for banning AI technology such as facial recognition. As a consequence, ethical considerations must enter the innovation and publication process of AI developments. Current efforts to develop ethical labels for responsible AI and to crowdsource citizens’ preferences about ethical AI are mostly concerned with the direct unethical consequences of AI behaviour and not its influence on the ethical conduct of the humans who interact with and through it. A thorough experimental approach to responsible AI will need to expand concerns about direct AI-induced harm to concerns about how bad machines can corrupt good morals.

Friday, July 16, 2021

“False positive” emotions, responsibility, and moral character

Anderson, R.A., et al.
Volume 214, September 2021, 104770


People often feel guilt for accidents—negative events that they did not intend or have any control over. Why might this be the case? Are there reputational benefits to doing so? Across six studies, we find support for the hypothesis that observers expect “false positive” emotions from agents during a moral encounter – emotions that are not normatively appropriate for the situation but still trigger in response to that situation. For example, if a person accidentally spills coffee on someone, most normative accounts of blame would hold that the person is not blameworthy, as the spill was accidental. Self-blame (and the guilt that accompanies it) would thus be an inappropriate response. However, in Studies 1–2 we find that observers rate an agent who feels guilt, compared to an agent who feels no guilt, as a better person, as less blameworthy for the accident, and as less likely to commit moral offenses. These attributions of moral character extend to other moral emotions like gratitude, but not to nonmoral emotions like fear, and are not driven by perceived differences in overall emotionality (Study 3). In Study 4, we demonstrate that agents who feel extremely high levels of inappropriate (false positive) guilt (e.g., agents who experience guilt but are not at all causally linked to the accident) are not perceived as having a better moral character, suggesting that merely feeling guilty is not sufficient to receive a boost in judgments of character. In Study 5, using a trust game design, we find that observers are more willing to trust others who experience false positive guilt compared to those who do not. In Study 6, we find that false positive experiences of guilt may actually be a reliable predictor of underlying moral character: self-reported predicted guilt in response to accidents negatively correlates with higher scores on a psychopathy scale.

From the General Discussion

It seems reasonable to think that there would be some benefit to communicating these moral emotions as a signal of character, and to being able to glean information about the character of others from observations of their emotional responses. If a propensity to feel guilt makes it more likely that a person is cooperative and trustworthy, observers would need to discriminate between people who are and are not prone to guilt. Guilt could therefore serve as an effective regulator of moral behavior in others in its role as a reliable signal of good character.  This account is consistent with theoretical accounts of emotional expressions more generally, either in the face, voice, or body, as a route by which observers make inferences about a person’s underlying dispositions (Frank, 1988). Our results suggest that false positive emotional responses specifically may provide an additional, and apparently informative, source of evidence for one’s propensity toward moral emotions and moral behavior.

Thursday, July 15, 2021

Overconfidence in news judgments is associated with false news susceptibility

B. A. Lyons, et al.
PNAS, Jun 2021, 118 (23) e2019527118
DOI: 10.1073/pnas.2019527118


We examine the role of overconfidence in news judgment using two large nationally representative survey samples. First, we show that three in four Americans overestimate their relative ability to distinguish between legitimate and false news headlines; respondents place themselves 22 percentiles higher than warranted on average. This overconfidence is, in turn, correlated with consequential differences in real-world beliefs and behavior. We show that overconfident individuals are more likely to visit untrustworthy websites in behavioral data; to fail to successfully distinguish between true and false claims about current events in survey questions; and to report greater willingness to like or share false content on social media, especially when it is politically congenial. In all, these results paint a worrying picture: The individuals who are least equipped to identify false news content are also the least aware of their own limitations and, therefore, more susceptible to believing it and spreading it further.


Although Americans believe the confusion caused by false news is extensive, relatively few indicate having seen or shared it—a discrepancy suggesting that members of the public may not only have a hard time identifying false news but fail to recognize their own deficiencies at doing so. If people incorrectly see themselves as highly skilled at identifying false news, they may unwittingly participate in its circulation. In this large-scale study, we show that not only is overconfidence extensive, but it is also linked to both self-reported and behavioral measures of false news website visits, engagement, and belief. Our results suggest that overconfidence may be a crucial factor for explaining how false and low-quality information spreads via social media.
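The paper’s headline overconfidence figure is an “overplacement” gap: the percentile respondents assign themselves minus the percentile their actual headline-rating performance warrants. A minimal sketch of that arithmetic, with entirely made-up respondents (the percentile convention and data are assumptions for illustration, not the paper’s estimator):

```python
def percentile_ranks(scores):
    """Percentile rank of each score: the share of respondents scoring strictly
    lower, plus half of any ties (one common convention)."""
    n = len(scores)
    ranks = []
    for s in scores:
        below = sum(1 for t in scores if t < s)
        ties = sum(1 for t in scores if t == s) - 1
        ranks.append(100.0 * (below + 0.5 * ties) / n)
    return ranks

def mean_overplacement(perceived_percentiles, quiz_scores):
    """Average gap between where respondents place themselves and where their
    headline-rating accuracy actually puts them."""
    actual = percentile_ranks(quiz_scores)
    gaps = [p - a for p, a in zip(perceived_percentiles, actual)]
    return sum(gaps) / len(gaps)

# Four hypothetical respondents: most rate themselves above their true rank.
perceived = [80, 70, 90, 50]     # self-assigned percentiles
scores = [12, 9, 7, 14]          # headlines judged correctly
gap = mean_overplacement(perceived, scores)
```

A positive mean gap is overplacement in aggregate; the study’s reported value of about 22 percentiles is the survey-weighted analogue of this quantity.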

Wednesday, July 14, 2021

Popularity is linked to neural coordination: Neural evidence for an Anna Karenina principle in social networks

Baek, E. C., et al. (2021)


People differ in how they attend to, interpret, and respond to their surroundings. Convergent processing of the world may be one factor that contributes to social connections between individuals. We used neuroimaging and network analysis to investigate whether the most central individuals in their communities (as measured by in-degree centrality, a notion of popularity) process the world in a particularly normative way. More central individuals had exceptionally similar neural responses to their peers and especially to each other in brain regions associated with high-level interpretations and social cognition (e.g., in the default-mode network), whereas less-central individuals exhibited more idiosyncratic responses. Self-reported enjoyment of and interest in stimuli followed a similar pattern, but accounting for these data did not change our main results. These findings suggest an “Anna Karenina principle” in social networks: Highly-central individuals process the world in exceptionally similar ways, whereas less-central individuals process the world in idiosyncratic ways.


What factors distinguish highly-central individuals in social networks? Our results are consistent with the notion that popular individuals (who are central in their social networks) process the world around them in normative ways, whereas unpopular individuals process the world around them idiosyncratically. Popular individuals exhibited greater mean neural similarity with their peers than unpopular individuals in several regions of the brain, including ones in which similar neural responding has been associated with shared higher-level interpretations of events and social cognition (e.g., regions of the default mode network) while viewing dynamic, naturalistic stimuli. Our results indicate that the relationship between popularity and neural similarity follows an Anna Karenina principle. Specifically, we observed that popular individuals were very similar to each other in their neural responses, whereas unpopular individuals were dissimilar both to each other and to their peers’ normative way of processing the world. Our findings suggest that highly-central people process and respond to the world around them in a manner that allows them to relate to and connect with many of their peers and that less-central people exhibit idiosyncrasies that may result in greater difficulty in relating to others.
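The two basic ingredients of this analysis, in-degree centrality from friendship nominations and mean neural similarity to peers, are simple to sketch. This is a toy reconstruction with made-up data, not the authors’ pipeline, which worked with neuroimaging time courses and full network analysis:

```python
def in_degree(nominations):
    """In-degree centrality: how many peers name this person.
    nominations: dict mapping each person to the list of peers they nominate."""
    counts = {person: 0 for person in nominations}
    for person, named in nominations.items():
        for peer in named:
            counts[peer] = counts.get(peer, 0) + 1
    return counts

def pearson(x, y):
    """Plain Pearson correlation between two equal-length response series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def mean_similarity_to_peers(person, responses):
    """Average correlation between one person's response series and everyone
    else's -- a stand-in for how 'normatively' they process the stimuli."""
    others = [pearson(responses[person], r)
              for name, r in responses.items() if name != person]
    return sum(others) / len(others)
```

The paper’s claim, in these terms, is that people with high `in_degree` also tend to have high `mean_similarity_to_peers`, especially with each other.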

Tuesday, July 13, 2021

Valence framing effects on moral judgments: A meta-analysis

McDonald, K., et al.
Volume 212, July 2021, 104703


Valence framing effects occur when participants make different choices or judgments depending on whether the options are described in terms of their positive outcomes (e.g. lives saved) or their negative outcomes (e.g. lives lost). When such framing effects occur in the domain of moral judgments, they have been taken to cast doubt on the reliability of moral judgments and raise questions about the extent to which these moral judgments are self-evident or justified in themselves. One important factor in this debate is the magnitude and variability of the extent to which differences in framing presentation impact moral judgments. Although moral framing effects have been studied by psychologists, the overall strength of these effects pooled across published studies is not yet known. Here we conducted a meta-analysis of 109 published articles (contributing a total of 146 unique experiments with 49,564 participants) involving valence framing effects on moral judgments and found a moderate effect (d = 0.50) among between-subjects designs as well as several moderator variables. While we find evidence for publication bias, statistically accounting for publication bias attenuates, but does not eliminate, this effect (d = 0.22). This suggests that the magnitude of valence framing effects on moral decisions is small, yet significant when accounting for publication bias.
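The pooling step behind a figure like d = 0.50 is, at its core, an inverse-variance weighted average of study effect sizes. Here is a bare-bones fixed-effect sketch; the paper’s meta-analysis (moderator models, random effects, publication-bias adjustment) is considerably more involved:

```python
def pooled_effect(ds, variances):
    """Inverse-variance weighted mean effect size: studies with smaller
    sampling variance (usually larger samples) count for more."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, ds)) / sum(weights)

# Two hypothetical studies: the more precise one pulls the pooled d toward it.
d_pooled = pooled_effect([0.2, 0.8], [0.1, 0.3])
```

With equal variances this reduces to a simple mean; unequal variances shift the estimate toward the better-powered studies.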

Monday, July 12, 2021

Workplace automation without achievement gaps: a reply to Danaher and Nyholm

Tigard, D.W. 
AI Ethics (2021). 


In a recent article in this journal, John Danaher and Sven Nyholm raise well-founded concerns that the advances in AI-based automation will threaten the values of meaningful work. In particular, they present a strong case for thinking that automation will undermine our achievements, thereby rendering our work less meaningful. It is also claimed that the threat to achievements in the workplace will open up ‘achievement gaps’—the flipside of the ‘responsibility gaps’ now commonly discussed in technology ethics. This claim, however, is far less worrisome than the general concerns for widespread automation, namely because it rests on several conceptual ambiguities. With this paper, I argue that although the threat to achievements in the workplace is problematic and calls for policy responses of the sort Danaher and Nyholm outline, when framed in terms of responsibility, there are no ‘achievement gaps’.

From the Conclusion

In closing, it is worth stopping to ask: Who exactly is the primary subject of “harm” (broadly speaking) in the supposed gap scenarios? Typically, in cases of responsibility gaps, the harm is seen as falling upon the person inclined to respond (usually with blame) and finding no one to respond to. This is often because they seek apologies or some sort of remuneration, and as we can imagine, it sets back their interests when such demands remain unfulfilled. But what about cases of achievement gaps? If we want to draw truly close analogies between the two scenarios, we would consider the subject of harm to be the person inclined to respond with praise and finding no one to praise. And perhaps there is some degree of disappointment here, but it hardly seems to be a worrisome kind of experience for that person. With this in mind, we might say there is yet another mismatch between responsibility gaps and achievement gaps. Nevertheless, on the account of Danaher and Nyholm, the harm is seen as falling upon the humans who miss out on achieving something in the workplace. But on that picture, we run into a sort of non-identity problem—for as soon as we identify the subjects of this kind of harm, we thereby affirm that it is not fitting to praise them for the workplace achievement, and so they cannot really be harmed in this way.

Sunday, July 11, 2021

It just feels right: an account of expert intuition

Fridland, E., & Stichter, M. 
Synthese (2020). 


One of the hallmarks of virtue is reliably acting well. Such reliable success presupposes that an agent (1) is able to recognize the morally salient features of a situation, and the appropriate response to those features and (2) is motivated to act on this knowledge without internal conflict. Furthermore, it is often claimed that the virtuous person can do this (3) in a spontaneous or intuitive manner. While these claims represent an ideal of what it is to have a virtue, it is less clear how to make good on them. That is, how is it actually possible to spontaneously and reliably act well? In this paper, we will lay out a framework for understanding how it is that one could reliably act well in an intuitive manner. We will do this by developing the concept of an action schema, which draws on the philosophical and psychological literature on skill acquisition and self-regulation. In short, we will give an account of how self-regulation, grounded in skillful structures, can allow for the accurate intuitions and flexible expertise required for virtue. While our primary goal in this paper is to provide a positive theory of how virtuous intuitions might be accounted for, we also take ourselves to be raising the bar for what counts as an explanation of reliable and intuitive action in general.


By thinking of skill and expertise as sophisticated forms of self-regulation, we are able to get a handle on intuition, generally, and on the ways in which reliably accurate intuition may develop in virtue, specifically. This gives us a way of explaining both the accuracy and immediacy of the virtuous person’s perception and intuitive responsiveness to a situation and it also gives us further reason to prefer a virtue as skill account of virtue. Moreover, such an approach gives us the resources to explain with some rigor and precision, the ways in which expert intuition can be accounted for, by appeal to action schemas. Lastly, our approach provides reason to think that expert intuition in the realm of virtue can indeed develop over time and with practice in a way that is flexible, controlled and intelligent. It lends credence to the view that virtue is learned and that we can act reliably and well by grounding our actions in expert intuition.

Saturday, July 10, 2021

Is Burnout Depression by Another Name?

Bianchi R, Verkuilen J, Schonfeld IS, et al. 
Clinical Psychological Science. March 2021. 


There is no consensus on whether burnout constitutes a depressive condition or an original entity requiring specific medical and legal recognition. In this study, we examined burnout–depression overlap using 14 samples of individuals from various countries and occupational domains (N = 12,417). Meta-analytically pooled disattenuated correlations indicated (a) that exhaustion—burnout’s core—is more closely associated with depressive symptoms than with the other putative dimensions of burnout (detachment and efficacy) and (b) that the exhaustion–depression association is problematically strong from a discriminant validity standpoint (r = .80). The overlap of burnout’s core dimension with depression was further illuminated in 14 exploratory structural equation modeling bifactor analyses. Given their consistency across countries, languages, occupations, measures, and methods, our results offer a solid base of evidence in support of the view that burnout problematically overlaps with depression. We conclude by outlining avenues of research that depart from the use of the burnout construct.


In essence, the core feature of burnout is depression.  However, burnout is not as debilitating as depression.
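The “disattenuated” correlations in the abstract come from Spearman’s classic correction, which divides the observed correlation by the square root of the product of the two scales’ reliabilities. A sketch with illustrative numbers (not taken from the paper):

```python
import math

def disattenuate(r_observed, reliability_x, reliability_y):
    """Spearman's correction for attenuation: the estimated correlation between
    two constructs once measurement error in both scales is removed."""
    return r_observed / math.sqrt(reliability_x * reliability_y)

# Hypothetical example: an observed exhaustion-depression correlation of .68,
# with scale reliabilities of .90 and .80, disattenuates to roughly .80.
r_true = disattenuate(0.68, 0.90, 0.80)
```

A disattenuated r of .80 is the kind of value the authors treat as problematically strong for discriminant validity, since it implies the two scales largely measure the same construct.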

Friday, July 9, 2021

Why It’s Time To Modernize Your Ethics Hotline

Claire Schmidt
Originally posted 18 Jun 21

Traditional whistleblower hotlines are going to be a thing of the past.

They certainly served a purpose and pioneered a way for employees to report wrongdoing at their companies confidentially. But the reasons they no longer serve companies and employees in 2021 are stacking up. And if companies continue to use them, they need to realize that issues or concerns may go unreported because employees don’t want to use that channel to report.

After all, the function of a whistleblower hotline is to encourage employees to report any wrongdoing they see in the workplace through a confidential channel; if employees avoid the channel, that function fails, which means the channels for reporting should get an upgrade.

But there are deeper reasons why issues remain unreported — and it goes beyond just offering a hotline to use. Today, companies need to give their employees better ways to report wrongdoing, as well as tell them the value of why they should do so. Otherwise, companies won’t hear about the full extent of wrongdoing happening in the workplace, whatever channel they provide.

The Evolution Of Workplace Reporting Channels

Whistleblower or ethics hotlines were initially that: a phone number — because that was the technology at the time — that employees could anonymously call to report wrongdoings at a company. The Sarbanes-Oxley Act of 2002 mandated that companies set up a method for “the confidential, anonymous submission by employees of the issuer of concerns regarding questionable accounting or auditing matters.”

Thursday, July 8, 2021

Free Will and Neuroscience: Decision Times and the Point of No Return

Alfred Mele
In Free Will, Causality, & Neuroscience
Chapter 4

Here are some excerpts:

Decisions to do things, as I conceive of them, are momentary actions of forming an intention to do them. For example, to decide to flex my right wrist now is to perform a (nonovert) action of forming an intention to flex it now (Mele 2003, ch. 9). I believe that Libet understands decisions in the same way. Some of our decisions and intentions are for the nonimmediate future and others are not. I have an intention today to fly to Brussels three days from now, and I have an intention now to click my “save” button now. The former intention is aimed at action three days in the future. The latter intention is about what to do now. I call intentions of these kinds, respectively, distal and proximal intentions (Mele 1992, pp. 143–44, 158, 2009, p. 10), and I make the same distinction in the sphere of decisions to act. Libet studies proximal intentions (or decisions or urges) in particular.


Especially in the case of the study now under discussion, readers unfamiliar with Libet-style experiments may benefit from a short description of my own experience as a participant in such an experiment (see Mele 2009, pp. 34–36). I had just three things to do: watch a Libet clock with a view to keeping track of when I first became aware of something like a proximal urge, decision, or intention to flex; flex whenever I felt like it (many times over the course of the experiment); and report, after each flex, where I believed the hand was on the clock at the moment of first awareness. (I reported this belief by moving a cursor to a point on the clock. The clock was very fast; it made a complete revolution in about 2.5 seconds.) Because I did not experience any proximal urges, decisions, or intentions to flex, I hit on the strategy of saying “now!” silently to myself just before beginning to flex. This is the mental event that I tried to keep track of with the assistance of the clock. I thought of the “now!” as shorthand for the imperative “flex now!” – something that may be understood as an expression of a proximal decision to flex.

Why did I say “now!” exactly when I did? On any given trial, I had before me a string of equally good moments for a “now!”-saying, and I arbitrarily picked one of the moments. But what led me to pick the moment I picked? The answer offered by Schurger et al. is that random noise crossed a decision threshold then. And they locate the time of the crossing very close to the onset of muscle activity – about 100 ms before it (pp. E2909, E2912). They write: “The reason we do not experience the urge to move as having happened earlier than about 200 ms before movement onset [referring to Libet’s participants’ reported W time] is simply because, at that time, the neural decision to move (crossing the decision threshold) has not yet been made” (E2910). If they are right, this is very bad news for Libet. His claim is that, in his experiments, decisions are made well before the average reported W time: −200 ms. (In a Libet-style experiment conducted by Schurger et al., average reported W time is −150 ms [p. E2905].) As I noted, if relevant proximal decisions are not made before W, Libet’s argument for the claim that they are made unconsciously fails.

Wednesday, July 7, 2021

“False positive” emotions, responsibility, and moral character

Anderson, R. A., et al.
Cognition, Volume 214, September 2021, 104770


People often feel guilt for accidents—negative events that they did not intend or have any control over. Why might this be the case? Are there reputational benefits to doing so? Across six studies, we find support for the hypothesis that observers expect “false positive” emotions from agents during a moral encounter – emotions that are not normatively appropriate for the situation but are still triggered in response to that situation. For example, if a person accidentally spills coffee on someone, most normative accounts of blame would hold that the person is not blameworthy, as the spill was accidental. Self-blame (and the guilt that accompanies it) would thus be an inappropriate response. However, in Studies 1–2 we find that observers rate an agent who feels guilt, compared to an agent who feels no guilt, as a better person, as less blameworthy for the accident, and as less likely to commit moral offenses. These attributions of moral character extend to other moral emotions like gratitude, but not to nonmoral emotions like fear, and are not driven by perceived differences in overall emotionality (Study 3). In Study 4, we demonstrate that agents who feel extremely high levels of inappropriate (false positive) guilt (e.g., agents who experience guilt but are not at all causally linked to the accident) are not perceived as having a better moral character, suggesting that merely feeling guilty is not sufficient to receive a boost in judgments of character. In Study 5, using a trust game design, we find that observers are more willing to trust others who experience false positive guilt compared to those who do not. In Study 6, we find that false positive experiences of guilt may actually be a reliable predictor of underlying moral character: self-reported predicted guilt in response to accidents correlates negatively with scores on a psychopathy scale.

General discussion

Collectively, our results support the hypothesis that false positive moral emotions are associated with both judgments of moral character and traits associated with moral character. We consistently found that observers use an agent's false positive experience of moral emotions (e.g., guilt, gratitude) to infer their underlying moral character, their social likability, and to predict both their future emotional responses and their future moral behavior. Specifically, we found that observers judge an agent who experienced “false positive” guilt (in response to an accidental harm) as a more moral person, more likeable, less likely to commit future moral infractions, and more trustworthy than an agent who experienced no guilt. Our results help explain the second “puzzle” regarding guilt for accidental actions (Kamtekar & Nichols, 2019). Specifically, one reason that observers may find an accidental agent less blameworthy, and yet still be wary if the agent does not feel guilt, is that such false positive guilt provides an important indicator of that agent's underlying character.

Tuesday, July 6, 2021

On the Origins of Diversity in Social Behavior

Young, L.J. & Zhang, Q.
Japanese Journal of Animal Psychology


Here we discuss the origins of diversity in social behavior by highlighting research using the socially monogamous prairie vole. Prairie voles display a rich social behavioral repertoire involving pair bonding and consoling behavior that are not observed in typical laboratory species. Oxytocin and vasopressin play critical roles in regulating pair bonding and consoling behavior. Oxytocin and vasopressin receptors show remarkable diversity in expression patterns both between and within species. Receptor expression patterns are associated with species differences in social behaviors. Variations in receptor genes have been linked to individual variation in expression patterns. We propose that "evolvability" in the oxytocin and vasopressin receptor genes allows for the repurposing of ancient maternal and territorial circuits to give rise to novel social behaviors such as pair bonding, consoling and selective aggression. We further propose that the evolvability of these receptor genes is due to their transcriptional sensitivity to genomic variation. This model provides a foundation for investigating the molecular mechanisms giving rise to the remarkable diversity in social behaviors found in vertebrates.

While this hypothesis remains to be tested, we believe this transcriptional flexibility is key to the origin of diversity in social behavior, enables rapid social behavioral adaptation through natural selection, and contributes to the remarkable diversity in social and reproductive behaviors in the animal kingdom.

Monday, July 5, 2021

When Do Robots have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will

Nahmias, E., Allen, C. A., & Loveall, B.
In Free Will, Causality, & Neuroscience
Chapter 3

Imagine that, in the future, humans develop the technology to construct humanoid robots with very sophisticated computers instead of brains and with bodies made out of metal, plastic, and synthetic materials. The robots look, talk, and act just like humans and are able to integrate into human society and to interact with humans across any situation. They work in our offices and our restaurants, teach in our schools, and discuss the important matters of the day in our bars and coffeehouses. How do you suppose you’d respond to one of these robots if you were to discover them attempting to steal your wallet or insulting your friend? Would you regard them as free and morally responsible agents, genuinely deserving of blame and punishment?

If you’re like most people, you are more likely to regard these robots as having free will and being morally responsible if you believe that they are conscious rather than non-conscious. That is, if you think that the robots actually experience sensations and emotions, you are more likely to regard them as having free will and being morally responsible than if you think they simply behave like humans based on their internal programming but with no conscious experiences at all. But why do many people have this intuition? Philosophers and scientists typically assume that there is a deep connection between consciousness and free will, but few have developed theories to explain this connection. To the extent that they have, it’s typically via some cognitive capacity thought to be important for free will, such as reasoning or deliberation, that consciousness is supposed to enable or bolster, at least in humans. But this sort of connection between consciousness and free will is relatively weak. First, it’s contingent; given our particular cognitive architecture, it holds, but if robots or aliens could carry out the relevant cognitive capacities without being conscious, this would suggest that consciousness is not constitutive of, or essential for, free will. Second, this connection is derivative, since the main connection goes through some capacity other than consciousness. Finally, this connection does not seem to be focused on phenomenal consciousness (first-person experience or qualia), but instead, on access consciousness or self-awareness (more on these distinctions below).

From the Conclusion

In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., and Westworld), viewers tend to think of the robots differently when they are portrayed in a way that suggests they express and feel emotions. No matter how intelligent or complex their behavior, they do not come across as free and autonomous until they seem to care about what happens to them (and perhaps others). Often this is portrayed by their showing fear of their own death or the death of others, or expressing love, anger, or joy. Sometimes it is portrayed by the robots’ expressing reactive attitudes, such as indignation, or our feeling such attitudes towards them. Perhaps the authors of these works recognize that the robots, and their stories, become most interesting when they seem to have free will, and people will see them as free when they start to care about what happens to them, when things really matter to them, which results from their experiencing the actual (and potential) outcomes of their actions.

Sunday, July 4, 2021

Understanding Side-Effect Intentionality Asymmetries: Meaning, Morality, or Attitudes and Defaults?

Laurent, S. M., Reich, B. J., & Skorinko, J. L. M.
Personality and Social Psychology Bulletin


People frequently label harmful (but not helpful) side effects as intentional. One proposed explanation for this asymmetry is that moral considerations fundamentally affect how people think about and apply the concept of intentional action. We propose something else: People interpret the meaning of questions about intentionally harming versus helping in fundamentally different ways. Four experiments substantially support this hypothesis. When presented with helpful (but not harmful) side effects, people interpret questions concerning intentional helping as literally asking whether helping is the agents’ intentional action or believe questions are asking about why agents acted. Presented with harmful (but not helpful) side effects, people interpret the question as asking whether agents intentionally acted, knowing this would lead to harm. Differences in participants’ definitions consistently helped to explain intentionality responses. These findings cast doubt on whether side-effect intentionality asymmetries are informative regarding people’s core understanding and application of the concept of intentional action.

From the Discussion

Second, questions about intentionality of harm may focus people on two distinct elements presented in the vignette: the agent’s intentional action (e.g., starting a profit-increasing program) and the harmful secondary outcome he knows this goal-directed action will cause. Because the concept of intentionality is most frequently applied to actions rather than consequences of actions (Laurent, Clark, & Schweitzer, 2015), reframing the question as asking about an intentional action undertaken with foreknowledge of harm has advantages. It allows consideration of key elements from the story and is responsive to what people may feel is at the heart of the question: “Did the chairman act intentionally, knowing this would lead to harm?” Notably, responses to questions capturing this idea significantly mediated intentionality responses in each experiment presented here, whereas other variables tested failed to consistently do so.