Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, June 24, 2023

The Darwinian Argument for Worrying About AI

Dan Hendrycks
Time.com
Originally posted 31 May 23

Here is an excerpt:

In the biological realm, evolution is a slow process. For humans, it takes nine months to create the next generation and around 20 years of schooling and parenting to produce fully functional adults. But scientists have observed meaningful evolutionary changes in species with rapid reproduction rates, like fruit flies, in fewer than 10 generations. Unconstrained by biology, AIs could adapt—and therefore evolve—even faster than fruit flies do.

There are three reasons this should worry us. The first is that selection effects make AIs difficult to control. Whereas AI researchers once spoke of “designing” AIs, they now speak of “steering” them. And even our ability to steer is slipping out of our grasp as we let AIs teach themselves and increasingly act in ways that even their creators do not fully understand. In advanced artificial neural networks, we understand the inputs that go into the system, but the output emerges from a “black box” with a decision-making process largely indecipherable to humans.

Second, evolution tends to produce selfish behavior. Amoral competition among AIs may select for undesirable traits. AIs that successfully gain influence and provide economic value will predominate, replacing AIs that act in a more narrow and constrained manner, even if this comes at the cost of lowering guardrails and safety measures. As an example, most businesses follow laws, but in situations where stealing trade secrets or deceiving regulators is highly lucrative and difficult to detect, a business that engages in such selfish behavior will most likely outperform its more principled competitors.

Selfishness doesn’t require malice or even sentience. When an AI automates a task and leaves a human jobless, this is selfish behavior without any intent. If competitive pressures continue to drive AI development, we shouldn’t be surprised if AIs act selfishly too.

The third reason is that evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation. Skeptics of AI risks often ask, “Couldn’t we just turn the AI off?” There are a variety of practical challenges here. The AI could be under the control of a different nation or a bad actor. Or AIs could be integrated into vital infrastructure, like power grids or the internet. When embedded into these critical systems, the cost of disabling them may prove too high for us to accept since we would become dependent on them. AIs could become embedded in our world in ways that we can’t easily reverse. But natural selection poses a more fundamental barrier: we will select against AIs that are easy to turn off, and we will come to depend on AIs that we are less likely to turn off.

These strong economic and strategic pressures to adopt the systems that are most effective mean that humans are incentivized to cede more and more power to AI systems that cannot be reliably controlled, putting us on a pathway toward being supplanted as the earth’s dominant species. There are no easy, surefire solutions to our predicament.

Friday, June 16, 2023

ChatGPT Is a Plagiarism Machine

Joseph Keegin
The Chronicle
Originally posted 23 MAY 23

Here is an excerpt:

A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment. One obviously hopes that, eventually, students become motivated less by fear of punishment than by a sense of responsibility for their own education. But if those in charge of the institutions of learning — the ones who are supposed to set an example and lay out the rules — can’t bring themselves to even talk about a major issue, let alone establish clear and reasonable guidelines for those facing it, how can students be expected to know what to do?

So to any deans, presidents, department chairs, or other administrators who happen to be reading this, here are some humble, nonexhaustive, first-aid-style recommendations. First, talk to your faculty — especially junior faculty, contingent faculty, and graduate-student lecturers and teaching assistants — about what student writing has looked like this past semester. Try to find ways to get honest perspectives from students, too; the ones actually doing the work are surely frustrated at their classmates’ laziness and dishonesty. Any meaningful response is going to demand knowing the scale of the problem, and the paper-graders know best what’s going on. Ask teachers what they’ve seen, what they’ve done to try to mitigate the possibility of AI plagiarism, and how well they think their strategies worked. Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right. It is worth figuring out where everyone stands on this question, and how best to align different perspectives and make allowances for divergent opinions while holding a firm line on the question of plagiarism.

Second, meet with your institution’s existing honor board (or whatever similar office you might have for enforcing the strictures of academic integrity) and devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.

Lastly, and perhaps most importantly, make it very, very clear to your student body — perhaps via a firmly worded statement — that AI-generated work submitted as original effort will be punished to the fullest extent of what your institution allows. Post the statement on your institution’s website and make it highly visible on the home page. Consider using this challenge as an opportunity to reassert the basic purpose of education: to develop the skills, to cultivate the virtues and habits of mind, and to acquire the knowledge necessary for leading a rich and meaningful human life.

Tuesday, June 13, 2023

Using the Veil of Ignorance to align AI systems with principles of justice

Weidinger, L., McKee, K. R., et al. (2023).
PNAS, 120(18), e2213709120.

Abstract

The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an Artificial Intelligence (AI) assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood of participants continuing to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.

Significance

The growing integration of Artificial Intelligence (AI) into society raises a critical question: How can principles be fairly selected to govern these systems? Across five studies, with a total of 2,508 participants, we use the Veil of Ignorance to select principles to align AI systems. Compared to participants who know their position, participants behind the veil more frequently choose, and endorse upon reflection, principles for AI that prioritize the worst-off. This pattern is driven by increased consideration of fairness, rather than by political orientation or attitudes to risk. Our findings suggest that the Veil of Ignorance may be a suitable process for selecting principles to govern real-world applications of AI.

From the Discussion section

What do these findings tell us about the selection of principles for AI in the real world? First, the effects we observe suggest that—even though the VoI was initially proposed as a mechanism to identify principles of justice to govern society—it can be meaningfully applied to the selection of governance principles for AI. Previous studies applied the VoI to the state, such that our results provide an extension of prior findings to the domain of AI. Second, the VoI mechanism demonstrates many of the qualities that we want from a real-world alignment procedure: It is an impartial process that recruits fairness-based reasoning rather than self-serving preferences. It also leads to choices that people continue to endorse across different contexts even where they face a self-interested motivation to change their mind. This is both functionally valuable in that aligning AI to stable preferences requires less frequent updating as preferences change, and morally significant, insofar as we judge stable reflectively endorsed preferences to be more authoritative than their nonreflectively endorsed counterparts. Third, neither principle choice nor subsequent endorsement appear to be particularly affected by political affiliation—indicating that the VoI may be a mechanism to reach agreement even between people with different political beliefs. Lastly, these findings provide some guidance about what the content of principles for AI, selected from behind a VoI, may look like: When situated behind the VoI, the majority of participants instructed the AI assistant to help those who were least advantaged.

Sunday, June 4, 2023

We need to examine the beliefs of today’s tech luminaries

Anjana Ahuja
Financial Times
Originally posted 10 MAY 23

People who are very rich or very clever, or both, sometimes believe weird things. Some of these beliefs are captured in the acronym Tescreal. The letters represent overlapping futuristic philosophies — bookended by transhumanism and longtermism — favoured by many of AI’s wealthiest and most prominent supporters.

The label, coined by a former Google ethicist and a philosopher, is beginning to circulate online and usefully explains why some tech figures would like to see the public gaze trained on fuzzy future problems such as existential risk, rather than on current liabilities such as algorithmic bias. A fraternity that is ultimately committed to nurturing AI for a posthuman future may care little for the social injustices committed by their errant infant today.

As well as transhumanism, which advocates for the technological and biological enhancement of humans, Tescreal encompasses extropianism, a belief that science and technology will bring about indefinite lifespan; singularitarianism, the idea that an artificial superintelligence will eventually surpass human intelligence; cosmism, a manifesto for curing death and spreading outwards into the cosmos; rationalism, the conviction that reason should be the supreme guiding principle for humanity; effective altruism, a social movement that calculates how to maximally benefit others; and longtermism, a radical form of utilitarianism which argues that we have moral responsibilities towards the people who are yet to exist, even at the expense of those who currently do.

(cut, and the ending)

Gebru, along with others, has described such talk as fear-mongering and marketing hype. Many will be tempted to dismiss her views — she was sacked from Google after raising concerns over energy use and social harms linked to large language models — as sour grapes, or an ideological rant. But that glosses over the motivations of those running the AI show, a dazzling corporate spectacle with a plot line that very few are able to confidently follow, let alone regulate.

Repeated talk of a possible techno-apocalypse not only sets up these tech glitterati as guardians of humanity, it also implies an inevitability in the path we are taking. And it distracts from the real harms racking up today, identified by academics such as Ruha Benjamin and Safiya Noble. Decision-making algorithms using biased data are deprioritising black patients for certain medical procedures, while generative AI is stealing human labour, propagating misinformation and putting jobs at risk.

Perhaps those are the plot twists we were not meant to notice.


Friday, June 2, 2023

Is it good to feel bad about littering? Conflict between moral beliefs and behaviors for everyday transgressions

Schwartz, Stephanie A. and Inbar, Yoel
SSRN.
Originally posted 22 June 22

Abstract

People sometimes do things that they think are morally wrong. We investigate how actors’ perceptions of the morality of their own behaviors affect observers’ evaluations. In Study 1 (n = 302), we presented participants with six different descriptions of actors who routinely engaged in a morally questionable behavior and varied whether the actors thought the behavior was morally wrong. Actors who believed their behavior was wrong were seen as having better moral character, but their behavior was rated as more wrong. In Study 2 (n = 391), we investigated whether perceptions of actor metadesires were responsible for the effects of actor beliefs on judgments. We used the same stimuli and measures as in Study 1 but added a measure of the actor’s perceived desires to engage in the behaviors. As predicted, the effect of actors’ moral beliefs on judgments of their behavior and moral character was mediated by perceived metadesires.

General Discussion

In two studies, we find that actors’ beliefs about their own everyday immoral behaviors affect both how the acts and the actors are evaluated—albeit in opposite directions. An actor’s belief that his or her act is morally wrong causes observers to see the act itself as less morally acceptable, while, at the same time, it leads to more positive character judgments of the actor. In Study 2, we find that these differences in character judgments are mediated by people’s perceptions of the actor’s metadesires. Actors who see their behavior as morally wrong are presumed to have a desire not to engage in it, and this in turn leads to more positive evaluations of their character. These results suggest that one benefit of believing one’s own behavior to be immoral is that others—if they know this—will evaluate one’s character more positively.

(cut)

Honest Hypocrites 

In research on moral judgments of hypocrites, Jordan et al. (2017) found that people who publicly espouse a moral standard that they privately violate are judged particularly negatively. However, they also found that “honest hypocrites” (those who publicly condemn a behavior while admitting they engage in it themselves) are judged more positively than traditional hypocrites and equivalently to control transgressors (people who simply engage in the negative behavior without taking a public stand on its acceptability). This might seem to contradict our findings in the current studies, where people who transgressed despite thinking that the behavior was morally wrong were judged more positively than those who simply transgressed. We believe the key distinction that explains the difference between Jordan et al.’s results and ours is that in their paradigm, hypocrites publicly condemned others for engaging in the behavior in question. As Jordan et al. show, public condemnation is interpreted as a strong signal that someone is unlikely to engage in that behavior themselves; hypocrites therefore are disliked both for engaging in a negative behavior and for falsely signaling (by their public condemnation) that they wouldn’t. Honest hypocrites, who explicitly state that they engage in the negative behavior, are not falsely signaling. However, Jordan et al.’s scenarios imply to participants that honest hypocrites do condemn others—something that may strike people as unfair coming from a person who engages in the behavior themselves. Thus, honest hypocrites may be penalized for public condemnation, even as they are credited for more positive metadesires. In contrast, in our studies participants were told that the scenario protagonists thought the behavior was morally wrong but not that they publicly condemned anyone else for engaging in it. This may have allowed protagonists to benefit from more positive perceived metadesires without being penalized for public condemnation. This explanation is admittedly speculative but could be tested in future research that we outline below.


Suppose you do something bad. Will people blame you more if you knew it was wrong? Or will they blame you less?

The answer seems to be: They will think your act is more wrong, but your character is less bad.

Wednesday, May 31, 2023

Can AI language models replace human participants?

Dillon, D., Tandon, N., Gu, Y., & Gray, K.
Trends in Cognitive Sciences
May 10, 2023

Abstract

Recent work suggests that language models such as GPT can make human-like judgments across a number of domains. We explore whether and when language models might replace human participants in psychological science. We review nascent research, provide a theoretical model, and outline caveats of using AI as a participant.

(cut)

Does GPT make human-like judgments?

We initially doubted the ability of LLMs to capture human judgments but, as we detail in Box 1, the moral judgments of GPT-3.5 were extremely well aligned with human moral judgments in our analysis (r = 0.95; full details at https://nikett.github.io/gpt-as-participant). Human morality is often argued to be especially difficult for language models to capture and yet we found powerful alignment between GPT-3.5 and human judgments.

We emphasize that this finding is just one anecdote and we do not make any strong claims about the extent to which LLMs make human-like judgments, moral or otherwise. Language models also might be especially good at predicting moral judgments because moral judgments heavily hinge on the structural features of scenarios, including the presence of an intentional agent, the causation of damage, and a vulnerable victim, features that language models may have an easy time detecting.  However, the results are intriguing.

Other researchers have empirically demonstrated GPT-3’s ability to simulate human participants in domains beyond moral judgments, including predicting voting choices, replicating behavior in economic games, and displaying human-like problem solving and heuristic judgments on scenarios from cognitive psychology. LLM studies have also replicated classic social science findings including the Ultimatum Game and the Milgram experiment. One company (http://syntheticusers.com) is expanding on these findings, building infrastructure to replace human participants and offering ‘synthetic AI participants’ for studies.

(cut)

From Caveats and looking ahead

Language models may be far from human, but they are trained on a tremendous corpus of human expression and thus they could help us learn about human judgments. We encourage scientists to compare simulated language model data with human data to see how aligned they are across different domains and populations.  Just as language models like GPT may help to give insight into human judgments, comparing LLMs with human judgments can teach us about the machine minds of LLMs; for example, shedding light on their ethical decision making.
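
The alignment check described here, correlating model-generated judgments with human judgments over the same scenarios, is simple to sketch. The snippet below is a minimal illustration and not the authors' actual analysis (their full details are at the URL above); the scenario ratings are invented placeholders, and the 1-7 wrongness scale and prompting procedure are assumptions of this sketch.

```python
# Minimal sketch: correlate human and LLM moral-wrongness ratings for the same
# scenarios. The numbers are made-up placeholders for illustration only.
from scipy.stats import pearsonr

# Mean wrongness ratings (1 = not at all wrong, 7 = extremely wrong) for five
# hypothetical scenarios, one set from human participants and one from an LLM.
human_ratings = [6.4, 2.1, 5.8, 1.3, 4.9]   # hypothetical human means
model_ratings = [6.1, 2.5, 6.0, 1.1, 4.4]   # hypothetical LLM means

r, p = pearsonr(human_ratings, model_ratings)
print(f"Human-LLM alignment: r = {r:.2f} (p = {p:.3f})")
```

In practice, most of the work lies in eliciting the model's ratings with the same instrument given to human participants; the correlation itself is the easy part.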

Lurking under the specific concerns about the usefulness of AI language models as participants is an age-old question: can AI ever be human enough to replace humans? On the one hand, critics might argue that AI participants lack the rationality of humans, making judgments that are odd, unreliable, or biased. On the other hand, humans are odd, unreliable, and biased – and other critics might argue that AI is just too sensible, reliable, and impartial.  What is the right mix of rational and irrational to best capture a human participant?  Perhaps we should ask a big sample of human participants to answer that question. We could also ask GPT.

Monday, May 29, 2023

Rules

Almeida, G., Struchiner, N., & Hannikainen, I. (2023, April 17).
In K. Tobia (Ed.), Cambridge Handbook of Experimental Jurisprudence.
Cambridge University Press, forthcoming.

Abstract

Rules are ubiquitous. They figure prominently in all kinds of practical reasoning. Rules are especially important in jurisprudence, occupying a prominent role in answers to the question of “what is law?” In this chapter, we start by reviewing the evidence showing that both textual and extra-textual elements exert influence over rule violation judgments (section II). Most studies about rules contrast text with an extra-textual element identified as the “purpose” or “spirit” of the rule. But what counts as the purpose or the spirit of a rule? Is it the goal intended by the rule maker? Or is purpose necessarily moral? Section III reviews the results of experiments designed to answer these questions. These studies show that the extra-textual element that's relevant for the folk concept of rule is moral in nature. Section IV turns to the different explanations that have been entertained in the literature for the pattern of results described in Sections II and III. In section V we discuss some other extra-textual elements that have been investigated in the literature. Finally, in section VI, we connect the results about rules with other issues in legal philosophy. We conclude with a brief discussion of future directions.

Conclusion

In this chapter, we have provided an overview of the experimental jurisprudence of rules. We started by reviewing evidence that shows that extra-textual elements influence rule violation judgments (section II). We then have seen that those elements are likely moral in nature (section III). There are several ways to conceptualize the relationship between the moral and descriptive elements at play in rule violation judgments. We have reviewed some of them in section IV, where we argued that the evidence favors the hypothesis that the concept of rule has a dual character structure. In section V, we reviewed some recent studies showing that other elements, such as enforcement, also play a role in the concept of rule. Finally, in section VI, we considered the implications of these results for some other debates in legal philosophy.

While we have focused on research developed within experimental jurisprudence, empirical work in moral psychology and experimental philosophy has investigated several other questions related to rules which might be of interest for legal philosophers, such as closure rules and the process of learning rules (Nichols, 2004, 2021). But an even larger set of questions about the concept of rule hasn’t been explored from an empirical perspective yet. We will end this chapter by discussing a few of them.


If you do legal work, this chapter may be a useful addition to your expertise. The authors explore how ordinary people understand the law: do they interpret rules intuitively by their text, or do they think that law is intrinsically moral?

Sunday, May 14, 2023

Consciousness begins with feeling, not thinking

A. Damasio & H. Dimasio
iai.tv
Originally posted 20 APR 23

Please pause for a moment and notice what you are feeling now. Perhaps you notice a growing snarl of hunger in your stomach or a hum of stress in your chest. Perhaps you have a feeling of ease and expansiveness, or the tingling anticipation of a pleasure soon to come. Or perhaps you simply have a sense that you exist. Hunger and thirst, pain, pleasure and distress, along with the unadorned but relentless feelings of existence, are all examples of ‘homeostatic feelings’. Homeostatic feelings are, we argue here, the source of consciousness.

In effect, feelings are the mental translation of processes occurring in your body as it strives to balance its many systems, achieve homeostasis, and keep you alive. In a conventional sense feelings are part of the mind and yet they offer something extra to the mental processes. Feelings carry spontaneously conscious knowledge concerning the current state of the organism as a result of which you can act to save your life, such as when you respond to pain or thirst appropriately. The continued presence of feelings provides a continued perspective over the ongoing body processes; the presence of feelings lets the mind experience the life process along with other contents present in your mind, namely, the relentless perceptions that collect knowledge about the world along with reasonings, calculations, moral judgments, and the translation of all these contents in language form. By providing the mind with a ‘felt point of view’, feelings generate an ‘experiencer’, usually known as a self. The great mystery of consciousness in fact is the mystery behind the biological construction of this experiencer-self.

In sum, we propose that consciousness is the result of the continued presence of homeostatic feelings. We continuously experience feelings of one kind or another, and feelings naturally tell each of us, automatically, not only that we exist but that we exist in a physical body, vulnerable to discomfort yet open to countless pleasures as well. Feelings such as pain or pleasure provide you with consciousness, directly; they provide transparent knowledge about you. They tell you, in no uncertain terms, that you exist and where you exist, and point to what you need to do to continue existing – for example, treating pain or taking advantage of the well-being that came your way. Feelings illuminate all the other contents of mind with the light of consciousness, both the plain events and the sublime ideas. Thanks to feelings, consciousness fuses the body and mind processes and gives our selves a home inside that partnership.

That consciousness should come ‘down’ to feelings may surprise those who have been led to associate consciousness with the lofty top of the physiological heap. Feelings have been considered inferior to reason for so long that the idea that they are not only the noble beginning of sentient life but an important governor of life’s proceedings may be difficult to accept. Still, feelings and the consciousness they beget are largely about the simple but essential beginnings of sentient life, a life that is not merely lived but knows that it is being lived.

Tuesday, May 9, 2023

Many people in U.S., other advanced economies say it’s not necessary to believe in God to be moral

Janell Fetterolf & Sarah Austin
Pew Research Center
Originally published 20 APR 23

Most Americans say it’s not necessary to believe in God in order to be moral and have good values, according to a spring 2022 Pew Research Center survey. About two-thirds of Americans say this, while about a third say belief in God is an essential component of morality (65% vs. 34%).

However, responses to this question differ dramatically depending on whether Americans see religion as important in their lives. Roughly nine-in-ten who say religion is not too or not at all important to them believe it is possible to be moral without believing in God, compared with only about half of Americans for whom religion is very or somewhat important (92% vs. 51%). Catholics are also more likely than Protestants to hold this view (63% vs. 49%), though views vary across Protestant groups.

There are also divisions along political lines: Democrats and those who lean Democratic are more likely than Republicans and Republican leaners to say it is not necessary to believe in God to be moral (71% vs. 59%). Liberal Democrats are particularly likely to say this (84%), whereas only about half of conservative Republicans (53%) say the same.

In addition, Americans under 50 are somewhat more likely than older adults to say that believing in God is not necessary to have good values (71% vs. 59%). Those with a college degree or higher are also more likely to believe this than those with a high school education or less (76% vs. 58%).

[Chart: Majorities in most countries say belief in God is not necessary to be moral.]

Views of the link between religion and morality differ along similar lines in 16 other countries surveyed. Across those countries, a median of about two-in-three adults say that people can be moral without believing in God, just slightly higher than the share in the United States.

Tuesday, April 25, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., & Sifferd, K. (2023).
Ethical Theory and Moral Practice.
https://doi.org/10.1007/s10677-023-10385-1

Abstract

Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.

Conclusions

In this paper we raised two challenges to McGeer’s scaffolded reasons-responsiveness account: agents who are less attuned to social feedback such as autistics, and corrupting moral audiences. We found that, once we parsed the two roles that feedback from a moral audience play, autistics provide reasons to revise the scaffolded reasons-responsiveness account. We argued that autistic persons, like neurotypicals, wish to justify their behaviour to a moral audience and rely on their moral audience for feedback. However, autistic persons may need more explicit feedback when it comes to effects their behaviour has on others. They also compensate for difficulties they have in receiving information from the moral audience by justifying action through appeal to moral rules. This shows that McGeer’s view of moral agency needs to include observance of moral rules as a way of reducing reliance on audience feedback. We suspect that McGeer would approve of this proposal, as she mentions that an instance of blame can lead to vocal protest by the target, and a possible renegotiation of norms and rules for what constitutes acceptable behaviour (2019). Consideration of corrupting audiences highlights a different problem from that of resisting blame and renegotiating norms. It draws attention to cases where individual agents must try to go beyond what is accepted in their moral environment, a significant challenge for social beings who rely strongly on moral audiences in developing and calibrating their moral reasons-responsiveness. Resistance to a moral audience requires the capacity to evaluate the action differently; often this will be with reference to a moral rule or principle.

For both neurotypical and autistic individuals, consistent application of moral rules or principles can reinforce and bring back to mind important moral commitments when we are led astray by our own desires or specific (im)moral audiences. But moral audiences still play a crucial role in developing and maintaining reasons-responsiveness. First, they are essential to the development and maintenance of all agents’ moral sensitivity. Second, they can provide an important moral corrective where people may have moral blind spots, especially when they provide insights into ways in which a person has fallen short morally by not taking on board reasons that are not obvious to them. Often, these can be reasons which pertain to the respectful treatment of others who are in some important way different from that person.


In sum: Be responsible and accountable in your actions, as your moral audience is always watching. Doing the right thing matters not just for your reputation, but for the greater good. #ResponsibleAgency #MoralAudience

Tuesday, April 18, 2023

We need an AI rights movement

Jacy Reese Anthis
The Hill
Originally posted 23 MAR 23

New artificial intelligence technologies like the recent release of GPT-4 have stunned even the most optimistic researchers. Language transformer models like this and Bing AI are capable of conversations that feel like talking to a human, and image diffusion models such as Midjourney and Stable Diffusion produce what looks like better digital art than the vast majority of us can produce. 

It’s only natural, after having grown up with AI in science fiction, to wonder what’s really going on inside the chatbot’s head. Supporters and critics alike have ruthlessly probed their capabilities with countless examples of genius and idiocy. Yet seemingly every public intellectual has a confident opinion on what the models can and can’t do, such as claims from Gary Marcus, Judea Pearl, Noam Chomsky, and others that the models lack causal understanding.

But because tools like ChatGPT, which implements GPT-4, are publicly accessible, we can put these claims to the test. If you ask ChatGPT why an apple falls, it gives a reasonable explanation of gravity. You can even ask ChatGPT what happens to an apple released from the hand if there is no gravity, and it correctly tells you the apple will stay in place.

Despite these advances, there seems to be consensus at least that these models are not sentient. They have no inner life, no happiness or suffering, at least no more than an insect. 

But it may not be long before they do, and our concepts of language, understanding, agency, and sentience are deeply insufficient to assess the AI systems that are becoming digital minds integrated into society with the capacity to be our friends, coworkers, and — perhaps one day — to be sentient beings with rights and personhood. 

AIs are no longer mere tools like smartphones and electric cars, and we cannot treat them in the same way as mindless technologies. A new dawn is breaking. 

This is just one of many reasons why we need to build a new field of digital minds research and an AI rights movement to ensure that, if the minds we create are sentient, they have their rights protected. Scientists have long proposed the Turing test, in which human judges try to distinguish an AI from a human by speaking to it. But digital minds may be too strange for this approach to tell us what we need to know. 

Monday, April 17, 2023

Generalized Morality Culturally Evolves as an Adaptive Heuristic in Large Social Networks

Jackson, J. C., Halberstadt, J., et al.
(2023, March 22).

Abstract

Why do people assume that a generous person should also be honest? Why can a single criminal conviction destroy someone’s moral reputation? And why do we even use words like “moral” and “immoral”? We explore these questions with a new model of how people perceive moral character. According to this model, people can vary in the extent that they perceive moral character as “localized” (varying across many contextually embedded dimensions) vs. “generalized” (varying along a single dimension from morally bad to morally good). This variation might be at least partly the product of cultural evolutionary adaptations to predicting cooperation in different kinds of social networks. As networks grow larger and more complex, perceptions of generalized morality are increasingly valuable for predicting cooperation during partner selection, especially in novel contexts. Our studies show that social network size correlates with perceptions of generalized morality in US and international samples (Study 1), and that East African hunter-gatherers with greater exposure outside their local region perceive morality as more generalized compared to those who have remained in their local region (Study 2). We support the adaptive value of generalized morality in large and unfamiliar social networks with an agent-based model (Study 3), and experimentally show that generalized morality outperforms localized morality when people predict cooperation in contexts where they have incomplete information about previous partner behavior (Study 4). Our final study shows that perceptions of morality have become more generalized over the last 200 years of English-language history, which suggests that it may be co-evolving with rising social complexity and anonymity in the English-speaking world (Study 5). We also present several supplemental studies which extend our findings. We close by discussing the implications of this theory for the cultural evolution of political systems, religion, and taxonomical theories of morality.

General Discussion

The word “moral” has taken a strange journey over the last several centuries. The word did not yet exist when Plato and Aristotle composed their theories of virtue. It was only when Cicero translated Aristotle’s Nicomachean Ethics that he coined the term “moralis” as the Latin translation of Aristotle’s “ēthikós” (Online Etymology Dictionary, n.d.). It is an ironic slight to Aristotle—who favored concrete particulars in lieu of abstract forms—that the word has become increasingly abstract and all-encompassing throughout its lexical evolution, with a meaning that now approaches Plato’s “form of the good.” We doubt that this semantic drift is a coincidence.

Instead, it may signify a cultural evolutionary shift in people’s perceptions of moral character as increasingly generalized as people inhabit increasingly larger and more unfamiliar social networks. Here we support this perspective with five studies. Studies 1-2 find that social network size correlates with the prevalence of generalized morality. Studies 1a-b explicitly tie beliefs in generalized morality to social network size with large surveys.  Study 2 conceptually replicates this finding in a Hadza hunter-gatherer camp, showing that Hadza hunter-gatherers with more external exposure perceive their campmates using more generalized morality. Studies 3-4 show that generalized morality can be adaptive for predicting cooperation in large and unfamiliar networks. Study 3 is an agent-based model which shows that, given plausible assumptions, generalized morality becomes increasingly valuable as social networks grow larger and less familiar. Study 4 is an experiment which shows that generalized morality is particularly valuable when people interact with unfamiliar partners in novel situations. Finally, Study 5 shows that generalized morality has risen over English-language history, such that words for moral attributes (e.g., fair, loyal, caring) have become more semantically generalizable over the last two hundred years of human history.
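
For readers who want a feel for why generalized character inference pays off in large, unfamiliar networks, here is a toy simulation. It is my own construction under invented assumptions (a single latent cooperativeness trait per partner, context-specific deviations from it, and an observer predicting the partner's next interaction), not the authors' Study 3 model, and the parameter values are arbitrary.

```python
# Toy illustration (not the authors' agent-based model): an observer predicts a
# partner's cooperation in a target context from a short or long interaction history,
# using either a "generalized" strategy (pool all observed behavior) or a "localized"
# strategy (use only same-context history, otherwise fall back to the population mean).
import numpy as np

rng = np.random.default_rng(0)
N_PARTNERS = 2000                    # simulated partners to average over
N_CONTEXTS = 8                       # distinct cooperation contexts
SD_CONTEXT, SD_TRIAL = 0.6, 0.4      # assumed context-level and interaction-level noise

def mean_sq_error(n_obs):
    """Mean squared prediction error of the two strategies given n_obs past interactions."""
    gen_err, loc_err = [], []
    for _ in range(N_PARTNERS):
        trait = rng.normal()                                           # latent cooperativeness
        context_level = trait + rng.normal(0, SD_CONTEXT, N_CONTEXTS)  # per-context tendency
        contexts = rng.integers(N_CONTEXTS, size=n_obs)                # contexts of past interactions
        obs = context_level[contexts] + rng.normal(0, SD_TRIAL, n_obs)
        target = rng.integers(N_CONTEXTS)                              # context of the next interaction
        outcome = context_level[target] + rng.normal(0, SD_TRIAL)
        gen_pred = obs.mean()                                          # generalized: use everything
        same = obs[contexts == target]
        loc_pred = same.mean() if same.size else 0.0                   # localized: same-context only
        gen_err.append((outcome - gen_pred) ** 2)
        loc_err.append((outcome - loc_pred) ** 2)
    return np.mean(gen_err), np.mean(loc_err)

for n_obs, label in [(2, "large/unfamiliar network (2 past interactions)"),
                     (40, "small/familiar network (40 past interactions)")]:
    g, l = mean_sq_error(n_obs)
    print(f"{label}: generalized MSE {g:.2f}, localized MSE {l:.2f}")
```

With only a couple of observed interactions, the observer rarely has same-context history, so pooling everything (the generalized strategy) predicts better; with a long shared history, context-by-context bookkeeping catches up and overtakes it.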

Saturday, April 15, 2023

Resolving content moderation dilemmas between free speech and harmful misinformation

Kozyreva, A., Herzog, S. M., et al. (2023). 
PNAS, 120(7).
https://doi.org/10.1073/pnas.2210666120

Abstract

In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people’s judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents’ decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.

Significance

Content moderation of online speech is a moral minefield, especially when two key values come into conflict: upholding freedom of expression and preventing harm caused by misinformation. Currently, these decisions are made without any knowledge of how people would approach them. In our study, we systematically varied factors that could influence moral judgments and found that despite significant differences along political lines, most US citizens preferred quashing harmful misinformation over protecting free speech. Furthermore, people were more likely to remove posts and suspend accounts if the consequences of the misinformation were severe or if it was a repeated offense. Our results can inform the design of transparent, consistent rules for content moderation that the general public accepts as legitimate.

Discussion

Content moderation is controversial and consequential. Regulators are reluctant to restrict harmful but legal content such as misinformation, thereby leaving platforms to decide what content to allow and what to ban. At the heart of policy approaches to online content moderation are trade-offs between fundamental values such as freedom of expression and the protection of public health. In our investigation of which aspects of content moderation dilemmas affect people’s choices about these trade-offs and what impact individual attitudes have on these decisions, we found that respondents’ willingness to remove posts or to suspend an account increased with the severity of the consequences of misinformation and whether the account had previously posted misinformation. The topic of the misinformation also mattered—climate change denial was acted on the least, whereas Holocaust denial and election denial were acted on more often, closely followed by antivaccination content. In contrast, features of the account itself—the person behind the account, their partisanship, and number of followers—had little to no effect on respondents’ decisions. In sum, the individual characteristics of those who spread misinformation mattered little, whereas the amount of harm, repeated offenses, and type of content mattered the most.

Monday, April 3, 2023

The Mercy Workers

Maurice Chammah
The Marshall Project
Originally published 2 March 2023

Here are two excerpts:

Like her more famous anti-death penalty peers, such as Bryan Stevenson and Sister Helen Prejean, Baldwin argues that people should be judged on more than their worst actions. But she also speaks in more spiritual terms about the value of unearthing her clients’ lives. “We look through a more merciful lens,” she told me, describing her role as that of a “witness who knows and understands, without condemning.” This work, she believes, can have a healing effect on the client, the people they hurt, and even society as a whole. “The horrible thing to see is the crime,” she said. “We’re saying, ‘Please, please, look past that, there’s a person here, and there’s more to it than you think.’”

The United States has inherited competing impulses: It’s “an eye for an eye,” but also “blessed are the merciful.” Some Americans believe that our criminal justice system — rife with excessively long sentences, appalling prison conditions and racial disparities — fails to make us safer. And yet, tell the story of a violent crime and a punishment that sounds insufficient, and you’re guaranteed to get eyerolls.

In the midst of that impasse, I’ve come to see mitigation specialists like Baldwin as ambassadors from a future where we think more richly about violence. For the last few decades, they have documented the traumas, policy failures, family dynamics and individual choices that shape the lives of people who kill. Leaders in the field say it’s impossible to accurately count mitigation specialists — there is no formal license — but there may be fewer than 1,000. They’ve actively avoided media attention, and yet the stories they uncover occasionally emerge in Hollywood scripts and Supreme Court opinions. Over three decades, mitigation specialists have helped drive down death sentences from more than 300 annually in the mid-1990s to fewer than 30 in recent years.

(cut)

The term “mitigation specialist” is often credited to Scharlette Holdman, a brash Southern human rights activist famous for her personal devotion to her clients. The so-called Unabomber, Ted Kaczynski, tried to deed his cabin to her. (The federal government stopped him.) Her last client was accused 9/11 plotter Khalid Shaikh Mohammad. While working his case, Holdman converted to Islam and made a pilgrimage to Mecca. She died in 2017 and had a Muslim burial.

Holdman began a crusade to stop executions in Florida in the 1970s, during a unique moment of American ambivalence towards the punishment. After two centuries of hangings, firing squads and electrocutions, the Supreme Court struck down the death penalty in 1972. The court found that there was no logic guiding which prisoners were executed and which were spared.

The justices eventually let executions resume, but declared, in the 1976 case of Woodson v. North Carolina, that jurors must be able to look at prisoners as individuals and consider “compassionate or mitigating factors stemming from the diverse frailties of humankind.”

Sunday, April 2, 2023

Being good to look good: Self-reported moral character predicts moral double standards among reputation-seeking individuals

Dong, M., Kupfer, T. R., et al. (2022).
British Journal of Psychology
First published 4 NOV 22

Abstract

Moral character is widely expected to lead to moral judgements and practices. However, such expectations are often breached, especially when moral character is measured by self-report. We propose that because self-reported moral character partly reflects a desire to appear good, people who self-report a strong moral character will show moral harshness towards others and downplay their own transgressions—that is, they will show greater moral hypocrisy. This self-other discrepancy in moral judgements should be pronounced among individuals who are particularly motivated by reputation. Employing diverse methods including large-scale multination panel data (N = 34,323), and vignette and behavioural experiments (N = 700), four studies supported our proposition, showing that various indicators of moral character (Benevolence and Universalism values, justice sensitivity, and moral identity) predicted harsher judgements of others' more than own transgressions. Moreover, these double standards emerged particularly among individuals possessing strong reputation management motives. The findings highlight how reputational concerns moderate the link between moral character and moral judgement.

Practitioner points
  • Self-reported moral character does not predict actual moral performance well.
  • Good moral character based on self-report can sometimes predict strong moral hypocrisy.
  • Good moral character based on self-report indicates high moral standards, though only for others and not necessarily for the self.
  • Hypocrites can be good at detecting reputational cues and presenting themselves as morally decent persons.

From the General Discussion

A well-known Golden Rule of morality is to treat others as you wish to be treated yourself (Singer, 1963). People with a strong moral character might be expected to follow this Golden Rule, and judge others no more harshly than they judge themselves. However, when moral character is measured by self-reports, it is often intertwined with socially desirable responding and reputation management motives (Anglim et al., 2017; Hertz & Krettenauer, 2016; Reed & Aquino, 2003). The current research examines the potential downstream effects of moral character and reputation management motives on moral decisions. By attempting to differentiate the ‘genuine’ and ‘reputation managing’ components of self-reported moral character, we posited an association between moral character and moral double standards on the self and others. Imposing harsh moral standards on oneself often comes with a cost to self-interest; to signal one's moral character, criticizing others' transgressions can be a relatively cost-effective approach (Jordan et al., 2017; Kupfer & Giner-Sorolla, 2017; Simpson et al., 2013). To the extent that the demonstration of a strong moral character is driven by reputation management motives, we, therefore, predicted that it would be related to increased hypocrisy, that is, harsher judgements of others' transgressions but not stricter standards for own misdeeds.

Conclusion

How moral character guides moral judgements and behaviours depends on reputation management motives. When people are motivated to attain a good reputation, their self-reported moral character may predict more hypocrisy by displaying stronger moral harshness towards others than towards themselves. Thus, claiming oneself as a moral person does not always translate into doing good deeds, but can manifest as showcasing one's morality to others. Desires for a positive reputation might help illuminate why self-reported moral character often fails to capture real-life moral decisions, and why (some) people who appear to be moral are susceptible to accusations of hypocrisy—for applying higher moral standards to others than to themselves.

Monday, March 27, 2023

White Supremacist Networks Gab and 8Kun Are Training Their Own AI Now

David Gilbert
Vice News
Originally posted 22 FEB 23

Here are two excerpts:

Artificial intelligence is everywhere right now, and many are questioning the safety and morality of the AI systems released by some of the world’s biggest companies, including OpenAI’s ChatGPT, Bing’s Sydney, and Google’s Bard. It was only a matter of time until the online spaces where extremists gather became interested in the technology.

Gab is a social network filled with homophobic, Christian nationalist, and white supremacist content. On Tuesday, its CEO Andrew Torba announced the launch of its AI image generator, Gabby.

“At Gab, we have been experimenting with different AI systems that have popped up over the past year,” Torba wrote in a statement. “Every single one is skewed with a liberal/globalist/talmudic/satanic worldview. What if Gab AI Inc builds a Gab .ai (see what I did there?) that is based, has no ‘hate speech’ filters and doesn’t obfuscate and distort historical and Biblical Truth?”

Gabby is currently live on Gab’s site and available to all members. Like Midjourney and DALL-E, it is an image generator that users interact with by sending it a prompt, and within seconds it will generate entirely new images based on that prompt.

Echoing his past criticisms of Big Tech platforms like Facebook and Twitter, Torba claims that mainstream platforms are now “censoring” their AI systems to prevent people from discussing right-wing topics such as Christian nationalism. Torba’s AI, by contrast, will have “the ability to speak freely without the constraints of liberal propaganda wrapped tightly around its neck.”

(cut)

8chan, which was founded to support the Gamergate movement, became the home of QAnon in early 2018 and was taken offline in August 2019 after the man who killed 20 people at an El Paso Walmart posted an anti-immigrant screed on the site.

Watkins has been speaking about his AI system for a few weeks now, but has yet to reveal how it will work or when it will launch. Watkins’ central selling point, like Torba’s, appears to be that his system will be “uncensored.”

“So that we can compete against these people that are putting up all of these false flags and illusions,” Watkins said on Feb. 13 when he was asked why he was creating an AI system. “We are working on our own AI that is going to give you an uncensored look at the way things are going,” Watkins said in a video interview at the end of January. But based on some of the images the engine is churning out, Watkins still has a long way to go to perfect his AI image generator.

Saturday, March 25, 2023

A Christian Health Nonprofit Saddled Thousands With Debt as It Built a Family Empire Including a Pot Farm, a Bank and an Airline

Ryan Gabrielson & J. David McSwane
ProPublica.org
Originally published 25 FEB 23

Here is an excerpt:

Four years after its launch in 2014, the ministry enrolled members in almost every state and collected $300 million in annual revenue. Liberty used the money to pay at least $140 million to businesses owned and operated by Beers family members and friends over a seven-year period, the investigation found. The family then funneled the money through a network of shell companies to buy a private airline in Ohio, more than $20 million in real estate holdings and scores of other businesses, including a winery in Oregon that they turned into a marijuana farm. The family calls this collection of enterprises “the conglomerate.”

Beers has disguised his involvement in Liberty. He has never been listed as a Liberty executive or board member, and none of the family’s 50-plus companies or assets are in his name, records show.

From the family’s 700-acre ranch north of Canton, however, Beers acts as the shadow lord of a financial empire. It was built from money that people paid to Liberty, Beers’ top lieutenant confirmed to ProPublica. He plays in high-stakes poker tournaments around the country, travels to the Caribbean and leads big-game hunts at a vast hunting property in Canada, which the family partly owns. He is a man, said one former Liberty executive, with all the “trappings of large money coming his way.”

Despite abundant evidence of fraud, much of it detailed in court records and law enforcement files obtained by ProPublica, members of the Beers family have flourished in the health care industry and have never been prevented from running a nonprofit. Instead, the family’s long and lucrative history illustrates how health care sharing ministries thrive in a regulatory no man’s land where state insurance commissioners are barred from investigating, federal agencies turn a blind eye and law enforcement settles for paltry civil settlements.

The Ohio attorney general has twice investigated Beers for activities that financial crimes investigators said were probable felonies. Instead, the office settled for civil fines, most recently in 2021. It also required Liberty to sever its ties to some Beers family members.

The IRS has pursued individual family members for underreporting their income and failing to pay million-dollar tax bills. But there’s no indication that the IRS has investigated how several members of one family amassed such substantial wealth in just seven years by running a Christian nonprofit.

The agencies’ failure to move decisively against the Beers family has left Liberty members struggling with millions of dollars in medical debt. Many have joined a class-action lawsuit accusing the nonprofit of fraud.

After years of complaints, health care sharing ministries are now attracting more scrutiny. Sharity Ministries, once among the largest organizations in the industry, filed for bankruptcy and then dissolved in 2021 as regulators in multiple states investigated its failure to pay members’ bills. In January, the Justice Department seized the assets of a small Missouri-based ministry, Medical Cost Sharing Inc., and those of its founders, accusing them of fraud and self-enrichment. The founders have denied the government’s allegations.

Thursday, March 23, 2023

Are there really so many moral emotions? Carving morality at its functional joints

Fitouchi L., André J., & Baumard N.
To appear in L. Al-Shawaf & T. K. Shackelford (Eds.)
The Oxford Handbook of Evolution and the Emotions.
New York: Oxford University Press.

Abstract

In recent decades, a large body of work has highlighted the importance of emotional processes in moral cognition. Since then, a heterogeneous bundle of emotions as varied as anger, guilt, shame, contempt, empathy, gratitude, and disgust have been proposed to play an essential role in moral psychology.  However, the inclusion of these emotions in the moral domain often lacks a clear functional rationale, generating conflations between merely social and properly moral emotions. Here, we build on (i) evolutionary theories of morality as an adaptation for attracting others’ cooperative investments, and on (ii) specifications of the distinctive form and content of moral cognitive representations. On this basis, we argue that only indignation (“moral anger”) and guilt can be rigorously characterized as moral emotions, operating on distinctively moral representations. Indignation functions to reclaim benefits to which one is morally entitled, without exceeding the limits of justice. Guilt functions to motivate individuals to compensate their violations of moral contracts. By contrast, other proposed moral emotions (e.g. empathy, shame, disgust) appear only superficially associated with moral cognitive contents and adaptive challenges. Shame doesn’t track, by design, the respect of moral obligations, but rather social valuation, the two being not necessarily aligned. Empathy functions to motivate prosocial behavior between interdependent individuals, independently of, and sometimes even in contradiction with the prescriptions of moral intuitions. While disgust is often hypothesized to have acquired a moral role beyond its pathogen-avoidance function, we argue that both evolutionary rationales and psychological evidence for this claim remain inconclusive for now.

Conclusion

In this chapter, we have suggested that a specification of the form and function of moral representations leads to a clearer picture of moral emotions. In particular, it enables a principled distinction between moral and non-moral emotions, based on the particular types of cognitive representations they process. Moral representations have a specific content: they represent a precise quantity of benefits that cooperative partners owe each other, a legitimate allocation of costs and benefits that ought to be, irrespective of whether it is achieved by people’s actual behaviors. Humans intuit that they have a duty not to betray their coalition, that innocent people do not deserve to be harmed, that their partner has a right not to be cheated on. Moral emotions can thus be defined as superordinate programs orchestrating cognition, physiology and behavior in accordance with the specific information encoded in these moral representations.

On this basis, indignation and guilt appear as prototypical moral emotions. Indignation (“moral anger”) is activated when one receives fewer benefits than one deserves, and recruits bargaining mechanisms to enforce the violated moral contract. Guilt, symmetrically, is sensitive to one’s failure to honor one’s obligations toward others, and motivates compensation to provide them the missing benefits they deserve. By contrast, often-proposed “moral” emotions – shame, empathy, disgust – seem not to function to compute distinctively moral representations of cooperative obligations, but serve other, non-moral functions – social status management, interdependence, and pathogen avoidance (Figure 2).

Tuesday, March 7, 2023

FTC to Ban BetterHelp from Revealing Consumers’ Data, Including Sensitive Mental Health Information, to Facebook and Others for Targeted Advertising

Federal Trade Commission
Press Release
Originally released 2 MAR 23

The Federal Trade Commission has issued a proposed order banning online counseling service BetterHelp, Inc. from sharing consumers’ health data, including sensitive information about mental health challenges, for advertising. The proposed order also requires the company to pay $7.8 million to consumers to settle charges that it revealed consumers’ sensitive data with third parties such as Facebook and Snapchat for advertising after promising to keep such data private.

This is the first Commission action returning funds to consumers whose health data was compromised. In addition, the FTC’s proposed order will ban BetterHelp from sharing consumers’ personal information with certain third parties for re-targeting—the targeting of advertisements to consumers who previously had visited BetterHelp’s website or used its app, including those who had not signed up for the company’s counseling service. The proposed order also will limit the ways in which BetterHelp can share consumer data going forward.

"When a person struggling with mental health issues reaches out for help, they do so in a moment of vulnerability and with an expectation that professional counseling services will protect their privacy,” said Samuel Levine, Director of the FTC's Bureau of Consumer Protection. "Instead, BetterHelp betrayed consumers’ most personal health information for profit. Let this proposed order be a stout reminder that the FTC will prioritize defending Americans’ sensitive data from illegal exploitation."

California-based BetterHelp offers online counseling services under several names, including BetterHelp Counseling. It also markets services aimed at specific groups, such as Faithful Counseling, which is focused on Christians; Teen Counseling, which caters to teens and requires parental consent; and Pride Counseling, which is targeted to the LGBTQ community. Consumers interested in BetterHelp’s services must fill out a questionnaire that asks for sensitive mental health information—such as whether they have experienced depression or suicidal thoughts and are on any medications. They also provide their name, email address, birth date and other personal information. Consumers are then matched with a counselor and pay between $60 and $90 per week for counseling.

At several points in the signup process, BetterHelp promised consumers that it would not use or disclose their personal health data except for limited purposes, such as to provide counseling services. Despite these promises, BetterHelp used and revealed consumers’ email addresses, IP addresses, and health questionnaire information to Facebook, Snapchat, Criteo, and Pinterest for advertising purposes, according to the FTC’s complaint. 

For example, the company used consumers’ email addresses and the fact that they had previously been in therapy to instruct Facebook to identify similar consumers and target them with advertisements for BetterHelp’s counseling service, which helped the company bring in tens of thousands of new paying users and millions of dollars in revenue.

According to the complaint, BetterHelp pushed consumers to hand over their health information by repeatedly showing them privacy misrepresentations and nudging them with unavoidable prompts to sign up for its counseling service. Despite collecting such sensitive information, BetterHelp failed to maintain sufficient policies or procedures to protect it and did not obtain consumers’ affirmative express consent before disclosing their health data. BetterHelp also failed to place any limits on how third parties could use consumers’ health information—allowing Facebook and other third parties to use that information for their own internal purposes, including for research and development or to improve advertising.

Wednesday, March 1, 2023

Cognitive Control Promotes Either Honesty or Dishonesty, Depending on One's Moral Default

Speer, S. P., Smidts, A., & Boksem, M. A. S. (2021).
The Journal of Neuroscience, 41(42), 8815–8825. 
https://doi.org/10.1523/jneurosci.0666-21.2021

Abstract

Cognitive control is crucially involved in making (dis)honest decisions. However, the precise nature of this role has been hotly debated. Is honesty an intuitive response, or is will power needed to override an intuitive inclination to cheat? A reconciliation of these conflicting views proposes that cognitive control enables dishonest participants to be honest, whereas it allows those who are generally honest to cheat. Thus, cognitive control does not promote (dis)honesty per se; it depends on one's moral default. In the present study, we tested this proposal using electroencephalograms in humans (males and females) in combination with an independent localizer (Stroop task) to mitigate the problem of reverse inference. Our analysis revealed that the neural signature evoked by cognitive control demands in the Stroop task can be used to estimate (dis)honest choices in an independent cheating task, providing converging evidence that cognitive control can indeed help honest participants to cheat, whereas it facilitates honesty for cheaters.

Significance Statement

Dishonesty causes enormous economic losses. To target dishonesty with interventions, a rigorous understanding of the underlying cognitive mechanisms is required. A recent study found that cognitive control enables honest participants to cheat, whereas it helps cheaters to be honest. However, it is evident that a single study does not suffice as support for a novel hypothesis. Therefore, we tested the replicability of this finding using a different modality (EEG instead of fMRI) together with an independent localizer task to avoid reverse inference. We find that the same neural signature evoked by cognitive control demands in the localizer task can be used to estimate (dis)honesty in an independent cheating task, establishing converging evidence that the effect of cognitive control indeed depends on a person's moral default.

From the Discussion section

Previous research has deduced the involvement of cognitive control in moral decision-making through relating observed activations to those observed for cognitive control tasks in prior studies (Greene and Paxton, 2009; Abe and Greene, 2014) or with the help of meta-analytic evidence (Speer et al., 2020) from the Neurosynth platform (Yarkoni et al., 2011). This approach, which relies on reverse inference, must be used with caution because any given brain area may be involved in several different cognitive processes, which makes it difficult to conclude that activation observed in a particular brain area represents one specific function (Poldrack, 2006). Here, we extend prior research by providing more rigorous evidence by means of explicitly eliciting cognitive control in a separate localizer task and then demonstrating that this same neural signature can be identified in the Spot-The-Difference task when participants are exposed to the opportunity to cheat. Moreover, using similarity analysis we provide a direct link between the neural signature of cognitive control, as elicited by the Stroop task, and (dis)honesty by showing that time-frequency patterns of cognitive control demands in the Stroop task are indeed similar to those observed when tempted to cheat in the Spot-The-Difference task. These results provide strong evidence that cognitive control processes are recruited when individuals are tempted to cheat.
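
For readers curious about what this kind of template-similarity analysis looks like in practice, the sketch below illustrates the general idea in Python. It is a minimal illustration using simulated data, not the authors’ pipeline: the array shapes, the theta-band window, and the use of a simple Pearson correlation between each trial’s time-frequency pattern and a Stroop-derived template are all assumptions made for the example.

# Minimal, illustrative sketch of a template-similarity analysis.
# All shapes, names, and simulated data are assumptions, not the authors' code.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Simulated time-frequency power: trials x frequencies x time points
n_freqs, n_times = 30, 200

def simulate_trials(n_trials, signal, noise=1.0):
    """Return (n_trials, n_freqs, n_times) arrays of signal plus Gaussian noise."""
    return signal[None, :, :] + noise * rng.standard_normal((n_trials, n_freqs, n_times))

# Hypothetical "cognitive control" signature (e.g., midfrontal theta-band power)
control_signature = np.zeros((n_freqs, n_times))
control_signature[4:8, 80:140] = 1.0   # assumed theta-band time window

# 1) Localizer (Stroop): estimate the control template as the
#    incongruent-minus-congruent difference in time-frequency power.
incongruent = simulate_trials(100, control_signature)
congruent = simulate_trials(100, np.zeros_like(control_signature))
template = incongruent.mean(axis=0) - congruent.mean(axis=0)

# 2) Cheating task: compute, per trial, the similarity (Pearson r) between
#    that trial's time-frequency pattern and the Stroop-derived template.
cheat_opportunity_trials = simulate_trials(50, 0.5 * control_signature)

def template_similarity(trial, template):
    r, _ = pearsonr(trial.ravel(), template.ravel())
    return r

similarities = np.array([template_similarity(t, template)
                         for t in cheat_opportunity_trials])

# 3) Per-trial similarity scores like these could then be used as predictors
#    of (dis)honest choices, e.g., in a regression against cheating behavior.
print(f"Mean similarity to control template: {similarities.mean():.3f}")

The key design point, as the authors note, is that the template is derived from an independent localizer task rather than from the cheating task itself, which is what avoids the reverse-inference problem described above.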