Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Morality. Show all posts

Thursday, March 14, 2024

A way forward for responsibility in the age of AI

Gogoshin, D.L.
Inquiry (2024)

Abstract

Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what are the goods attached to them? The debate concerning ‘machine morality’ is often hinged on whether artificial agents are or could ever be morally responsible, and it is generally taken for granted (following Matthias 2004) that if they cannot, they pose a threat to the moral responsibility system and associated goods. In this paper, I challenge this assumption by asking what the goods of this system, if any, are, and what happens to them in the face of artificially intelligent agents. I will argue that they neither introduce new problems for the moral responsibility system nor do they threaten what we really (ought to) care about. I conclude the paper with a proposal for how to secure this objective.


Here is my summary:

While AI may not possess true moral agency, it's crucial to consider how the development and use of AI can be made more responsible. The author challenges the assumption that AI's lack of moral responsibility inherently creates problems for our current system of ethics. Instead, they focus on the "goods" this system provides, such as the practices of fitting blame and praise, and how these can be upheld even in the presence of AI. To achieve this, the author proposes several steps, including:
  1. Shifting the focus from AI's moral agency to the agency of those who design, build, and use it. This means holding these individuals accountable for the societal impacts of AI.
  2. Developing clear ethical guidelines for AI development and use. These guidelines should be comprehensive, addressing issues like fairness, transparency, and accountability.
  3. Creating robust oversight mechanisms. This could involve independent bodies that monitor AI development and use, and have the power to intervene when necessary.
  4. Promoting public understanding of AI. This will help people make informed decisions about how AI is used in their lives and hold developers and users accountable.

Tuesday, March 12, 2024

Discerning Saints: Moralization of Intrinsic Motivation and Selective Prosociality at Work

Kwon, M., Cunningham, J. L., & Jachimowicz, J. M. (2023).
Academy of Management Journal, 66(6), 1625–1650.

Abstract

Intrinsic motivation has received widespread attention as a predictor of positive work outcomes, including employees’ prosocial behavior. We offer a more nuanced view by proposing that intrinsic motivation does not uniformly increase prosocial behavior toward all others. Specifically, we argue that employees with higher intrinsic motivation are more likely to value intrinsic motivation and associate it with having higher morality (i.e., they moralize it). When employees moralize intrinsic motivation, they perceive others with higher intrinsic motivation as being more moral and thus engage in more prosocial behavior toward those others, and judge others who are less intrinsically motivated as less moral and thereby engage in less prosocial behaviors toward them. We provide empirical support for our theoretical model across a large-scale, team-level field study in a Latin American financial institution (n = 784, k = 185) and a set of three online studies, including a preregistered experiment (n = 245, 243, and 1,245), where we develop a measure of the moralization of intrinsic motivation and provide both causal and mediating evidence. This research complicates our understanding of intrinsic motivation by revealing how its moralization may at times dim the positive light of intrinsic motivation itself.

The article is paywalled. Here are some thoughts:

This study examines how intrinsically motivated employees (those who enjoy their work) may treat colleagues differently depending on those colleagues' own levels of intrinsic motivation. The key points are:

Main finding: Employees with high intrinsic motivation tend to view others who are also highly intrinsically motivated as more moral. As a result, they offer more help and support to those similar colleagues, while judging colleagues with lower intrinsic motivation as less moral and offering them less help.

Theoretical framework: The concept of "moralization of intrinsic motivation" (MOIM) explains this behavior. Essentially, intrinsic motivation becomes linked to moral judgment, influencing who is seen as "good" and deserving of help.

Implications:
  • For theory: This research adds a new dimension to understanding intrinsic motivation, highlighting the potential for judgment and selective behavior.
  • For practice: Managers and leaders should be aware of the unintended consequences of promoting intrinsic motivation, as it might create bias and division among employees.
  • For employees: Those lacking intrinsic motivation might face disadvantages due to judgment from colleagues. They could try job crafting or seeking alternative support strategies.
Overall, the study reveals a nuanced perspective on intrinsic motivation, acknowledging its positive aspects while recognizing its potential to create inequality and ethical concerns.

Monday, March 11, 2024

Why People Fail to Notice Horrors Around Them

Tali Sharot and Cass R. Sunstein
The New York Times
Originally posted 25 Feb 24

The miraculous history of our species is peppered with dark stories of oppression, tyranny, bloody wars, savagery, murder and genocide. When looking back, we are often baffled and ask: Why weren't the horrors halted earlier? How could people have lived with them?

The full picture is immensely complicated. But a significant part of it points to the rules that govern the operations of the human brain.

Extreme political movements, as well as deadly conflicts, often escalate slowly. When threats start small and increase gradually, they end up eliciting a weaker emotional reaction, less resistance and more acceptance than they would otherwise. The slow increase allows larger and larger horrors to play out in broad daylight, taken for granted, seen as ordinary.

One of us is a neuroscientist; the other is a law professor. From our different fields, we have come to believe that it is not possible to understand the current period, and the shifts in what counts as normal, without appreciating why and how people do not notice so much of what we live with.

The underlying reason is a pivotal biological feature of our brain: habituation, or our tendency to respond less and less to things that are constant or that change slowly. You enter a cafe filled with the smell of coffee and at first the smell is overwhelming, but no more than 20 minutes go by and you cannot smell it any longer. This is because your olfactory neurons stop firing in response to a now-familiar odor.

Similarly, you stop hearing the persistent buzz of an air-conditioner because your brain filters out background noise. Your brain cares about what recently changed, not about what remained the same.

Habituation is one of our most basic biological characteristics, something that we two-legged, bigheaded creatures share with other animals on earth, including apes, elephants, dogs, birds, frogs, fish and rats. Human beings also habituate to complex social circumstances such as war, corruption, discrimination, oppression, widespread misinformation and extremism. Habituation does not only result in a reduced tendency to notice and react to grossly immoral deeds around us; it also increases the likelihood that we will engage in them ourselves.


Here is my summary:

The authors argue that our failure to notice the horrors around us stems largely from habituation, the brain's tendency to respond less and less to stimuli that are constant or that change slowly. Because extreme political movements and deadly conflicts typically escalate gradually, each step elicits a weaker emotional reaction than it would if it arrived all at once, and ever larger wrongs come to be taken for granted. Habituation applies not only to sensory experience, like the smell of coffee or the hum of an air-conditioner, but also to complex social conditions such as war, corruption, discrimination, and widespread misinformation. Most troubling, habituating to immoral conduct does not merely dull our reactions to it; it increases the likelihood that we will engage in it ourselves.

Sunday, March 10, 2024

MAGA’s Violent Threats Are Warping Life in America

David French
New York Times - Opinion
Originally published 18 Feb 24

Amid the constant drumbeat of sensational news stories — the scandals, the legal rulings, the wild political gambits — it’s sometimes easy to overlook the deeper trends that are shaping American life. For example, are you aware how much the constant threat of violence, principally from MAGA sources, is now warping American politics? If you wonder why so few people in red America seem to stand up directly against the MAGA movement, are you aware of the price they might pay if they did?

Late last month, I listened to a fascinating NPR interview with the journalists Michael Isikoff and Daniel Klaidman regarding their new book, “Find Me the Votes,” about Donald Trump’s efforts to overturn the 2020 election. They report that Georgia prosecutor Fani Willis had trouble finding lawyers willing to help prosecute her case against Trump. Even a former Georgia governor turned her down, saying, “Hypothetically speaking, do you want to have a bodyguard follow you around for the rest of your life?”

He wasn’t exaggerating. Willis received an assassination threat so specific that one evening she had to leave her office incognito while a body double wearing a bulletproof vest courageously pretended to be her and offered a target for any possible incoming fire.


Here is my summary of the article:

David French discusses the pervasive threat of violence, particularly from MAGA sources, and its impact on American politics. The author highlights instances where individuals faced intimidation and threats for opposing the MAGA movement, such as a Georgia prosecutor receiving an assassination threat and judges being swatted. The article also mentions the significant increase in threats against members of Congress since Trump took office, with Capitol Police opening over 8,000 threat assessments in a year. The piece sheds light on the chilling effect these threats have on individuals like Mitt Romney, who spends $5,000 per day on security, and lawmakers who fear for their families' safety. The overall narrative underscores how these violent threats are shaping American life and politics.

Thursday, March 7, 2024

Canada Postpones Plan to Allow Euthanasia for Mentally Ill

Craig McCulloh
Voice of America News
Originally posted 8 Feb 24

The Canadian government is delaying access to medically assisted death for people with mental illness.

Those suffering from mental illness were supposed to be able to access Medical Assistance in Dying — also known as MAID — starting March 17. The recent announcement by the government of Canadian Prime Minister Justin Trudeau was the second delay after original legislation authorizing the practice passed in 2021.

The delay came in response to a recommendation by a majority of the members of a committee made up of senators and members of Parliament.

One of the most high-profile proponents of MAID is British Columbia-based lawyer Chris Considine. In the mid-1990s, he represented Sue Rodriguez, who was dying from amyotrophic lateral sclerosis, commonly known as ALS.

Their bid for approval of a medically assisted death was rejected at the time by the Supreme Court of Canada. But a law passed in 2016 legalized euthanasia for individuals with terminal conditions. From then until 2022, more than 45,000 people chose to die.


Summary:

Canada originally planned to expand its Medical Assistance in Dying (MAiD) program to include individuals with mental illnesses in March 2024.
  • This plan has been postponed until 2027 due to concerns about the healthcare system's readiness and potential ethical issues.
  • The original legislation passed in 2021, but concerns about safeguards and mental health support led to delays.
  • This issue is complex and ethically charged, with advocates arguing for individual autonomy and opponents raising concerns about coercion and vulnerability.
I would be concerned about the following issues:
  • Vulnerability: Mental illness can impair judgment, raising concerns about informed consent and potential coercion.
  • Safeguards: Concerns exist about insufficient safeguards to prevent abuse or exploitation.
  • Mental health access: Limited access to adequate mental health treatment could contribute to undue pressure towards MAiD.
  • Social inequalities: Concerns exist about disproportionate access to MAiD based on socioeconomic background.

Wednesday, March 6, 2024

We're good people: Moral conviction as social identity

Ekstrom, P. D. (2022, April 27).

Abstract

Moral convictions—attitudes that people construe as matters of right and wrong—have unique effects on behavior, from activism to intolerance. Less is known, though, about the psychological underpinnings of moral convictions themselves. I propose that moral convictions are social identities. Consistent with the idea that moral convictions are identities, I find in two studies that attitude-level moral conviction predicts (1) attitudes’ self-reported identity centrality and (2) reaction time to attitude-related stimuli in a me/not me task. Consistent with the idea that moral convictions are social identities, I find evidence that participants used their moral convictions to perceive, categorize, and remember information about other individuals’ positions on political issues, and that they did so more strongly when their convictions were more identity-central. In short, the identities that participants’ moral convictions defined were also meaningful social categories, providing a basis to distinguish “us” from “them.” However, I also find that non-moral attitudes can serve as meaningful social categories. Although moral convictions were more identity-central than non-moral attitudes, moral and non-moral attitudes may both define social identities that are more or less salient in certain situations. Regardless, social identity may help explain intolerance for moral disagreement, and identity-based interventions may help reduce that intolerance.

Here is my summary:

Main Hypothesis:
  • Moral convictions (beliefs about right and wrong) are seen as fundamental and universally true, distinct from other attitudes.
  • The research proposes that they shape how people view themselves and others, acting as social identities.
Key Points:
  • Moral convictions define group belonging: People use them to categorize themselves and others as "good" or "bad," similar to how we might use group affiliations like race or religion.
  • They influence our relationships: We tend to be more accepting and trusting of those who share our moral convictions.
  • They can lead to conflict: When morals clash, it can create animosity and division between groups with different convictions.
Evidence:
  • The research cites studies showing how people judge others based on their moral stances, similar to how they judge based on group membership.
  • It also shows how moral convictions predict behavior like activism and intolerance towards opposing views.
Implications:
  • Understanding how moral convictions function as social identities can help explain conflict, prejudice, and social movements.
  • It may also offer insights into promoting understanding and cooperation between groups with differing moral beliefs.
Overall:

This research suggests that moral convictions are more than just strong opinions; they act as powerful social identities shaping how we see ourselves and interact with others. Understanding this dynamic can offer valuable insights into social behavior and potential avenues for promoting tolerance and cooperation.

Tuesday, March 5, 2024

You could lie to a health chatbot – but it might change how you perceive yourself

Dominic Wilkinson
The Conversation
Originally posted 8 FEB 24

Here is an excerpt:

The ethics of lying

There are different ways that we can think about the ethics of lying.

Lying can be bad because it causes harm to other people. Lies can be deeply hurtful to another person. They can cause someone to act on false information, or to be falsely reassured.

Sometimes, lies can harm because they undermine someone else’s trust in people more generally. But those reasons will often not apply to the chatbot.

Lies can wrong another person, even if they do not cause harm. If we willingly deceive another person, we potentially fail to respect their rational agency, or use them as a means to an end. But it is not clear that we can deceive or wrong a chatbot, since they don’t have a mind or ability to reason.

Lying can be bad for us because it undermines our credibility. Communication with other people is important. But when we knowingly make false utterances, we diminish the value, in other people’s eyes, of our testimony.

For the person who repeatedly expresses falsehoods, everything that they say then falls into question. This is part of the reason we care about lying and our social image. But unless our interactions with the chatbot are recorded and communicated (for example, to humans), our chatbot lies aren’t going to have that effect.

Lying is also bad for us because it can lead to others being untruthful to us in turn. (Why should people be honest with us if we won’t be honest with them?)

But again, that is unlikely to be a consequence of lying to a chatbot. On the contrary, this type of effect could be partly an incentive to lie to a chatbot, since people may be conscious of the reported tendency of ChatGPT and similar agents to confabulate.


Here is my summary:

The article discusses the potential consequences of lying to a health chatbot, even though it might seem tempting. It highlights a situation where someone frustrated with a wait for surgery considers exaggerating their symptoms to a chatbot screening them.

While lying might offer short-term benefits like quicker attention, the author argues it could have unintended consequences:

Impact on healthcare:
  • Inaccurate information can hinder proper diagnosis and treatment.
  • It contributes to an already strained healthcare system.
Self-perception:
  • Repeatedly lying, even to a machine, can erode honesty and integrity.
  • It reinforces unhealthy avoidance of seeking professional help.
The article encourages readers to be truthful with chatbots for better healthcare outcomes and self-awareness. It acknowledges the frustration with healthcare systems but emphasizes the importance of transparency for both individual and collective well-being.

Monday, March 4, 2024

How to Deal with Counter-Examples to Common Morality Theory: A Surprising Result

Herissone-Kelly P.
Cambridge Quarterly of Healthcare Ethics.
2022;31(2):185-191.
doi:10.1017/S096318012100058X

Abstract

Tom Beauchamp and James Childress are confident that their four principles—respect for autonomy, beneficence, non-maleficence, and justice—are globally applicable to the sorts of issues that arise in biomedical ethics, in part because those principles form part of the common morality (a set of general norms to which all morally committed persons subscribe). Inevitably, however, the question arises of how the principlist ought to respond when presented with apparent counter-examples to this thesis. I examine a number of strategies the principlist might adopt in order to retain common morality theory in the face of supposed counter-examples. I conclude that only a strategy that takes a non-realist view of the common morality’s principles is viable. Unfortunately, such a view is likely not to appeal to the principlist.


Herissone-Kelly examines various strategies the principlist could employ to address counter-examples:
  • Refine the principles: Clarify or reinterpret the principles to better handle specific cases.
  • Prioritize principles: Establish a hierarchy among the principles to resolve conflicts.
  • Supplement the principles: Introduce additional considerations or context-specific factors.
  • Limit the scope: Acknowledge that the principles may not apply universally to all cultures or situations.
Herissone-Kelly argues that none of these strategies are fully satisfactory. Refining or prioritizing principles risks distorting their original meaning or introducing arbitrariness. Supplementing them can lead to an unwieldy and complex framework. Limiting their scope undermines the theory's claim to universality.

He concludes that the most viable approach is to adopt a non-realist view of the common morality's principles. This means understanding them not as objective moral facts but as flexible tools for ethical reflection and deliberation, open to interpretation and adaptation in different contexts. While this may seem to weaken the theory's authority, Herissone-Kelly argues that it allows for a more nuanced and practical application of ethical principles in a diverse world.

Saturday, March 2, 2024

Unraveling the Mindset of Victimhood

Scott Barry Kaufman
Scientific American
Originally posted 29 June 2020

Here is an excerpt:

Constantly seeking recognition of one’s victimhood. Those who score high on this dimension have a perpetual need to have their suffering acknowledged. In general, this is a normal psychological response to trauma. Experiencing trauma tends to “shatter our assumptions” about the world as a just and moral place. Recognition of one’s victimhood is a normal response to trauma and can help reestablish a person’s confidence in their perception of the world as a fair and just place to live.

Also, it is normal for victims to want the perpetrators to take responsibility for their wrongdoing and to express feelings of guilt. Studies conducted on testimonies of patients and therapists have found that validation of the trauma is important for therapeutic recovery from trauma and victimization (see here and here).

A sense of moral elitism. Those who score high on this dimension perceive themselves as having an immaculate morality and view everyone else as being immoral. Moral elitism can be used to control others by accusing others of being immoral, unfair or selfish, while seeing oneself as supremely moral and ethical.

Moral elitism often develops as a defense mechanism against deeply painful emotions and as a way to maintain a positive self-image. As a result, those under distress tend to deny their own aggressiveness and destructive impulses and project them onto others. The “other” is perceived as threatening whereas the self is perceived as persecuted, vulnerable and morally superior.


Here is a summary:

Kaufman explores the concept of "interpersonal victimhood," a tendency to view oneself as the repeated target of unfair treatment by others. He identifies several key characteristics of this mindset, including:
  • Belief in inherent unfairness: The conviction that the world is fundamentally unjust and that one is disproportionately likely to experience harm.
  • Moral self-righteousness: The perception of oneself as more ethical and deserving of good treatment compared to others.
  • Rumination on past injustices: Dwelling on and replaying negative experiences, often with feelings of anger and resentment.
  • Difficulty taking responsibility: Attributing negative outcomes to external factors rather than acknowledging one's own role.
Kaufman argues that while acknowledging genuine injustices is important, clinging to a victimhood identity can be detrimental. It can hinder personal growth, strain relationships, and fuel negativity. He emphasizes the importance of developing a more balanced perspective, acknowledging both external challenges and personal agency. The article offers strategies for fostering resilience.

Friday, February 23, 2024

How Did Polyamory Become So Popular?

Jennifer Wilson
The New Yorker
Originally posted 25 Dec 23

Here is an excerpt:

What are all these open couples, throuples, and polycules suddenly doing in the culture, besides one another? To some extent, art is catching up with life. Fifty-one per cent of adults younger than thirty told Pew Research, in 2023, that open marriage was “acceptable,” and twenty per cent of all Americans report experimenting with some form of non-monogamy. The extramarital “entanglements” of Will and Jada Pinkett Smith have been tabloid fodder for the past two years. (Pinkett Smith once clarified that their marriage is not “open”; rather, it is a “relationship of transparency.”) In 2020, the reality show “House Hunters,” on HGTV, saw a throuple trying to find their dream home—one with a triple-sink vanity. The same year, the city of Somerville, Massachusetts, allowed domestic partnerships to be made up of “two or more” people.

Some, like the sex therapist (and author of “Open Monogamy, A Guide to Co-Creating Your Ideal Relationship Agreement,” 2021), Tammy Nelson, have attributed the acceptance of a greater number of partners to pandemic-born domestic ennui; after being stuck with one person all day every day, the thinking goes, couples are ready to open up more than their pods. Nelson is part of a cohort of therapists, counsellors, and advice writers, including Esther Perel and the “Savage Love” columnist Dan Savage, who are encouraging married couples to think more flexibly about monogamy. Their advice has found an eager audience among the well-heeled attendees of the “ideas festival” circuit, featured in talks at Google, SXSW, and the Aspen Institute.

The new monogamy skepticism of the moneyed gets some screen time in the pandemic-era breakout hit “The White Lotus.” The show mocks the leisure class as they mope around five-star resorts in Hawaii and Sicily, stewing over love, money, and the impossibility, for people in their tax bracket, of separating the two. In the latest season, Ethan (Will Sharpe) and Harper (Aubrey Plaza) are an attractive young couple stuck in a sexless marriage—until, that is, they go on vacation with the monogamish Cameron (Theo James) and Daphne (Meghann Fahy). After Cameron and Harper have some unaccounted-for time together in a hotel room, Ethan tracks down an unbothered Daphne, lounging on the beach, to share his suspicion that something has happened between their spouses. Some momentary concern on Daphne’s face quickly morphs—in a devastatingly subtle performance by Fahy—into a sly smile. “A little mystery? It’s kinda sexy,” she assures Ethan, before luring him into a seaside cove. That night Ethan and Harper have sex, the wounds of their marriage having been healed by a little something on the side.


Here is my summary:

The article discusses the increasing portrayal and acceptance of non-monogamous relationships in contemporary culture, particularly in literature, cinema, and television. It notes that open relationships, throuples, and polyamorous arrangements are gaining prominence, reflecting changing societal attitudes. The author cites statistics and cultural examples, including a Gucci perfume ad and a plot twist in the TV series "Riverdale." The rise of non-monogamy is linked to a broader shift in societal norms, with some attributing it to pandemic-related ennui and a desire for more flexibility in relationships. The text also delves into the historical roots of polyamory, mentioning the Kerista movement and its adaptation to conservative times in the 1980s. The author concludes by expressing a desire for a more inclusive and equitable representation of polyamory, critiquing the limited perspective presented in a specific memoir discussed in the text.

Saturday, February 17, 2024

What Stops People From Standing Up for What’s Right?

Julie Sasse
Greater Good
Originally published 17 Jan 24

Here is an excerpt:

How can we foster moral courage?

Every person can try to become more morally courageous. However, it does not have to be a solitary effort. Instead, institutions such as schools, companies, or social media platforms play a significant role. So, what are concrete recommendations to foster moral courage?
  • Establish and strengthen social and moral norms: With a solid understanding of what we consider right and wrong, it becomes easier to detect wrongdoings. Institutions can facilitate this process by identifying and modeling fundamental values. For example, norms and values expressed by teachers can be important points of reference for children and young adults.
  • Overcome uncertainty: If it is unclear whether someone’s behavior is wrong, witnesses should feel comfortable to inquire, for example, by asking other bystanders how they judge the situation or a potential victim whether they are all right.
  • Contextualize anger: In the face of wrongdoings, anger should not be suppressed since it can provide motivational fuel for intervention. Conversely, if someone expresses anger, it should not be diminished as irrational but considered a response to something unjust. 
  • Provide and advertise reporting systems: By providing reporting systems, institutions relieve witnesses from the burden of selecting and evaluating individual means of intervention and reduce the need for direct confrontation.
  • Show social support: If witnesses directly confront a perpetrator, others should be motivated to support them to reduce risks.
We see that there are several ways to make moral courage less difficult, but they do require effort from individuals and institutions. Why is that effort worth it? Because if more individuals are willing and able to show moral courage, more wrongdoings would be addressed and rectified—and that could help us to become a more responsible and just society.


Main points:
  • Moral courage is the willingness to stand up for what's right despite potential risks.
  • It's rare because of various factors, including the complexity of the internal process, situational barriers, and the difficulty of seeing the long-term benefits.
  • Key stages involve noticing a wrongdoing, interpreting it as wrong, feeling responsible, believing in your ability to intervene, and accepting potential risks.
  • Personality traits and situational factors influence these stages.

Monday, February 12, 2024

Will AI ever be conscious?

Tom McClelland
Clare College
Unknown date of post

Here is an excerpt:

Human consciousness really is a mysterious thing. Cognitive neuroscience can tell us a lot about what’s going on in your mind as you read this article - how you perceive the words on the page, how you understand the meaning of the sentences and how you evaluate the ideas expressed. But what it can’t tell us is how all this comes together to constitute your current conscious experience. We’re gradually homing in on the neural correlates of consciousness – the neural patterns that occur when we process information consciously. But nothing about these neural patterns explains what makes them conscious while other neural processes occur unconsciously. And if we don’t know what makes us conscious, we don’t know whether AI might have what it takes. Perhaps what makes us conscious is the way our brain integrates information to form a rich model of the world. If that’s the case, an AI might achieve consciousness by integrating information in the same way. Or perhaps we’re conscious because of the details of our neurobiology. If that’s the case, no amount of programming will make an AI conscious. The problem is that we don’t know which (if either!) of these possibilities is true.

Once we recognise the limits of our current understanding, it looks like we should be agnostic about the possibility of artificial consciousness. We don’t know whether AI could have conscious experiences and, unless we crack the problem of consciousness, we never will. But here’s the tricky part: when we start to consider the ethical ramifications of artificial consciousness, agnosticism no longer seems like a viable option. Do AIs deserve our moral consideration? Might we have a duty to promote the well-being of computer systems and to protect them from suffering? Should robots have rights? These questions are bound up with the issue of artificial consciousness. If an AI can experience things then it plausibly ought to be on our moral radar.

Conversely, if an AI lacks any subjective awareness then we probably ought to treat it like any other tool. But if we don’t know whether an AI is conscious, what should we do?

The full article is available at the original source, alongside a book promotion.

Sunday, January 14, 2024

Google is Free: Moral Evaluations of Intergroup Curiosity

Mosley, A. J., & Solomon, L. H. (2023).
Personality and Social Psychology Bulletin, 0(0).

Abstract

Two experiments investigated how evaluations of intergroup curiosity differed depending on whether people placed responsibility for their learning on themselves or on outgroup members. In Study 1, participants (n = 340; 51% White-American, 49% Black-American) evaluated White actors who were curious about Black culture and placed responsibility on outgroup members to teach versus on themselves to learn. Both Black and White participants rated the latter actors as more moral, and perceptions of effort mediated this effect. A follow-up preregistered study (n = 513; 75% White-American) asked whether perceptions of greater effort cause greater perceptions of moral goodness. Replicating Study 1, participants rated actors as more moral when they placed responsibility on themselves versus others. Participants also rated actors as more moral when they exerted high versus low effort. These results clarify when and why participants view curiosity as morally good and help to strengthen bridges between work on curiosity, moral cognition, and intergroup relations.


Here is my summary:

The researchers found that people evaluate intergroup curiosity more favorably when the curious individual places responsibility on themselves to learn rather than on the outgroup to teach. Perceptions of effort mediate this effect: curious individuals who exert greater effort are seen as more moral. In short, intergroup curiosity reads as morally good when the curious person takes ownership of their own learning and puts in the work to understand the outgroup.
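The mediation logic described above (responsibility framing → perceived effort → moral evaluation) can be sketched with the standard product-of-coefficients approach. The data below are simulated purely for illustration and are not the study's; the variable names and effect sizes are hypothetical.

```python
import numpy as np

def mediation_paths(x, m, y):
    """Estimate simple mediation paths with ordinary least squares.
    a: effect of x on mediator m; b: effect of m on y controlling for x;
    indirect effect = a * b (product-of-coefficients)."""
    # Path a: regress the mediator m on x (with intercept).
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    # Paths b and c': regress y on both x and m.
    X2 = np.column_stack([np.ones_like(x), x, m])
    coefs = np.linalg.lstsq(X2, y, rcond=None)[0]
    c_prime, b = coefs[1], coefs[2]
    return a, b, a * b, c_prime

# Toy data: framing (1 = places responsibility on self) raises perceived
# effort, which in turn raises morality ratings.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 500).astype(float)
m = 1.0 * x + rng.normal(0, 0.5, 500)            # perceived effort
y = 0.8 * m + 0.1 * x + rng.normal(0, 0.5, 500)  # morality rating
a, b, indirect, direct = mediation_paths(x, m, y)
```

With data generated this way, the indirect effect (through perceived effort) dominates the direct effect of framing, which is the pattern the mediation claim describes.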

Thursday, January 11, 2024

The paucity of morality in everyday talk

Atari, M., Mehl, M.R., Graham, J. et al. 
Sci Rep 13, 5967 (2023).

Abstract

Given its centrality in scholarly and popular discourse, morality should be expected to figure prominently in everyday talk. We test this expectation by examining the frequency of moral content in three contexts, using three methods: (a) Participants’ subjective frequency estimates (N = 581); (b) Human content analysis of unobtrusively recorded in-person interactions (N = 542 participants; n = 50,961 observations); and (c) Computational content analysis of Facebook posts (N = 3822 participants; n = 111,886 observations). In their self-reports, participants estimated that 21.5% of their interactions touched on morality (Study 1), but objectively, only 4.7% of recorded conversational samples (Study 2) and 2.2% of Facebook posts (Study 3) contained moral content. Collectively, these findings suggest that morality may be far less prominent in everyday life than scholarly and popular discourse (and laypeople) presume.

Summary

Overall, the findings of this research suggest that morality is far less prevalent in everyday talk than previously assumed. While participants overestimated the frequency of moral content in their self-reports, objective measures revealed that moral topics are relatively rare in everyday conversations and online interactions.

The study's authors propose several explanations for this discrepancy between subjective and objective findings. One possibility is that people tend to remember instances of moral talk more vividly than other types of conversation. Additionally, people may be more likely to report that they engage in moral talk when they are explicitly asked about it, as this may make them more aware of their own moral values.

Regardless of the underlying reasons, the findings of this research suggest that morality is not as prominent in everyday life as is often assumed. This may have implications for how we understand and promote moral behavior in society.
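As a toy illustration of the kind of computational content analysis used in Study 3, a dictionary-based classifier can flag posts containing moral vocabulary and report the overall rate. The word list here is illustrative only, not the actual moral-language dictionary the authors used.

```python
# Illustrative word list; the real study used a validated moral-language lexicon.
MORAL_TERMS = {"fair", "unfair", "honest", "dishonest", "harm", "justice",
               "wrong", "moral", "immoral", "virtue", "cheat", "duty"}

def has_moral_content(text):
    """Return True if any word in the text matches the moral lexicon."""
    words = {w.strip(".,!?;:").lower() for w in text.split()}
    return not words.isdisjoint(MORAL_TERMS)

def moral_rate(posts):
    """Fraction of posts flagged as containing moral content."""
    flagged = sum(has_moral_content(p) for p in posts)
    return flagged / len(posts)

posts = ["That referee was so unfair to our team",
         "Made pasta for dinner tonight",
         "Traffic was terrible this morning",
         "Proud of my kid today"]
```

Run over a real corpus, a rate like the study's 2.2% for Facebook posts would emerge from exactly this kind of count, though production lexicons also handle stemming and context.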

Saturday, December 16, 2023

Older people are perceived as more moral than younger people: data from seven culturally diverse countries

Piotr Sorokowski, et al. (2023)
Ethics & Behavior,
DOI: 10.1080/10508422.2023.2248327

Abstract

Given the adage “older and wiser,” it seems justified to assume that older people may be stereotyped as more moral than younger people. We aimed to study whether assessments of a person’s morality differ depending on their age. We asked 661 individuals from seven societies (Australians, Britons, Burusho of Pakistan, Canadians, Dani of Papua, New Zealanders, and Poles) whether younger (~20-year-old), middle-aged (~40-year-old), or older (~60-year-old) people were more likely to behave morally and have a sense of right and wrong. We observed that older people were perceived as more moral than younger people. The effect was particularly salient when comparing 20-year-olds to either 40- or 60-year-olds and was culturally universal, as we found it in both WEIRD (i.e. Western, Educated, Industrialized, Rich, Democratic) and non-WEIRD societies.


Here is my summary:

The researchers found that older people were rated as more moral than younger people, and this effect was particularly strong when comparing 20-year-olds to either 40- or 60-year-olds. The effect was also consistent across cultures, suggesting that it is a universal phenomenon.

The researchers suggest that there are a few possible explanations for this finding. One possibility is that older people are simply seen as having more life experience and wisdom, which are both associated with morality. Another possibility is that older people are more likely to conform to social norms, which are often seen as being moral. Finally, it is also possible that people simply have a positive bias towards older people, which leads them to perceive them as being more moral.

Whatever the explanation, the finding that older people are perceived as more moral than younger people has a number of implications. For example, it suggests that older people may be more likely to be trusted and respected, and they may also be more likely to be seen as leaders. Additionally, the finding suggests that ageism may be a form of prejudice, as it involves making negative assumptions about people based on their age.

Wednesday, December 6, 2023

People are increasingly following their heart and not the Bible - poll

Ryan Foley
Christian Today
Originally published 2 DEC 23

A new study reveals that less than one-third of Americans believe the Bible should serve as the foundation for determining right and wrong, even as most people express support for traditional moral values.

The fourth installment of the America's Values Study, released by the Cultural Research Center at Arizona Christian University Tuesday, asked respondents for their thoughts on traditional moral values and what they would like to see as "America's foundation for determining right and wrong." The survey is based on responses from 2,275 U.S. adults collected in July 2022.

Overall, when asked to identify what they viewed as the primary determinant of right and wrong in the U.S., a plurality of participants (42%) said: "what you feel in your heart." An additional 29% cited majority rule as their desired method for determining right and wrong, while just 29% expressed a belief that the principles laid out in the Bible should determine the understanding of right and wrong in the U.S. That figure rose to 66% among Spiritually Active, Governance Engaged Conservative Christians.

The only other demographic subgroups where at least a plurality of respondents indicated a desire for the Bible to serve as the determinant of right and wrong in the U.S. were respondents who attend an evangelical church (62%), self-described Republicans (57%), theologically defined born-again Christians (54%), self-identified conservatives (49%), those who are at least 50 years of age (39%), members of all Protestant congregations (39%), self-identified Christians (38%) and those who attend mainline Protestant churches (36%).

By contrast, an outright majority of respondents who do not identify with a particular faith at all (53%), along with half of LGBT respondents (50%), self-described moderates (47%), political independents (47%), Democrats (46%), self-described liberals (46%) and Catholic Church attendees (46%) maintained that "what you feel in your heart" should form the foundation of what Americans view as right and wrong.

Sunday, November 26, 2023

How robots can learn to follow a moral code

Neil Savage
Nature.com
Originally posted 26 OCT 23

Here is an excerpt:

Defining ethics

The ability to fine-tune an AI system’s behaviour to promote certain values has inevitably led to debates on who gets to play the moral arbiter. Vosoughi suggests that his work could be used to allow societies to tune models to their own taste — if a community provides examples of its moral and ethical values, then with these techniques it could develop an LLM more aligned with those values, he says. However, he is well aware of the possibility for the technology to be used for harm. “If it becomes a free for all, then you’d be competing with bad actors trying to use our technology to push antisocial views,” he says.

Precisely what constitutes an antisocial view or unethical behaviour, however, isn’t always easy to define. Although there is widespread agreement about many moral and ethical issues — the idea that your car shouldn’t run someone over is pretty universal — on other topics there is strong disagreement, such as abortion. Even seemingly simple issues, such as the idea that you shouldn’t jump a queue, can be more nuanced than is immediately obvious, says Sydney Levine, a cognitive scientist at the Allen Institute. If a person has already been served at a deli counter but drops their spoon while walking away, most people would agree it’s okay to go back for a new one without waiting in line again, so the rule ‘don’t cut the line’ is too simple.

One potential approach for dealing with differing opinions on moral issues is what Levine calls a moral parliament. “This problem of who gets to decide is not just a problem for AI. It’s a problem for governance of a society,” she says. “We’re looking to ideas from governance to help us think through these AI problems.” Similar to a political assembly or parliament, she suggests representing multiple different views in an AI system. “We can have algorithmic representations of different moral positions,” she says. The system would then attempt to calculate what the likely consensus would be on a given issue, based on a concept from game theory called cooperative bargaining.
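As an illustration of the cooperative-bargaining idea behind the moral parliament (not code from the article), one could score each candidate action by the product of the utilities that the represented moral positions assign to it, i.e. the Nash bargaining solution, which favors actions no single position finds unacceptable. All names and numbers below are hypothetical.

```python
import math

def parliament_choice(utilities):
    """Pick the action maximizing the product of per-position utilities.

    utilities: dict mapping action -> list of utilities in (0, 1], one per
    represented moral position. The multiplicative rule penalizes any action
    that one position rates near zero, approximating consensus-seeking.
    """
    return max(utilities, key=lambda action: math.prod(utilities[action]))

# Toy version of the deli-counter example: three moral positions score
# two candidate actions for the person who dropped their spoon.
options = {
    "wait_in_line_again": [0.9, 0.2, 0.5],
    "grab_new_spoon":     [0.7, 0.8, 0.8],
}
```

Here the strict rule-follower's low rating (0.2) for rejoining the line is outweighed: the product rule selects the action all positions find broadly acceptable, mirroring the "most people would agree it's okay" intuition in the excerpt.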


Here is my summary:

Autonomous robots will need to be able to make ethical decisions in order to safely and effectively interact with humans and the world around them.

The article proposes a number of ways that robots can be taught to follow a moral code. One approach is to use supervised learning, in which robots are trained on a dataset of moral dilemmas and their corresponding solutions. Another approach is to use reinforcement learning, in which robots are rewarded for making ethical decisions and punished for making unethical decisions.
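The reinforcement-learning idea can be sketched minimally, under the strong simplifying assumption that the moral code is compressed into a scalar reward signal; this toy agent is an illustration, not a method from the article.

```python
import random

random.seed(0)
ACTIONS = ["ethical", "unethical"]
q = {a: 0.0 for a in ACTIONS}   # learned value of each action
alpha, epsilon = 0.1, 0.2       # learning rate, exploration rate

def reward(action):
    # The entire "moral code" lives in this reward: +1 / -1.
    return 1.0 if action == "ethical" else -1.0

for _ in range(500):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)   # explore
    else:
        action = max(q, key=q.get)        # exploit current estimates
    # One-step value update toward the observed reward.
    q[action] += alpha * (reward(action) - q[action])
```

The agent quickly learns to prefer the rewarded action, which also exposes the challenge the article raises: everything hinges on whether the reward function actually captures the nuance of the moral code.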

The article also discusses the challenges of teaching robots to follow a moral code. One challenge is that moral codes are often complex and nuanced, and it can be difficult to define them in a way that can be understood by a robot. Another challenge is that moral codes can vary across cultures, and it is important to develop robots that can adapt to different moral frameworks.

The article concludes by arguing that teaching robots to follow a moral code is an important ethical challenge that we need to address as we develop more sophisticated artificial intelligence systems.

Thursday, November 9, 2023

Moral Future-Thinking: Does the Moral Circle Stand the Test of Time?

Law, K. F., Syropoulos, S., et al. (2023, August 10). 
PsyArXiv

Abstract

The long-term collective welfare of humanity may lie in the hands of those who are presently living. But do people normatively include future generations in their moral circles? Across four studies conducted on Prolific Academic (N Total=823), we find evidence for a progressive decline in the subjective moral standing of future generations, demonstrating decreasing perceived moral obligation, moral concern, and prosocial intentions towards other people with increasing temporal distance. While participants generally tend to display present-oriented moral preferences, we also reveal individual differences that mitigate this tendency and predict pro-future outcomes, including individual variation in longtermism beliefs and the vividness of one’s imagination. Our studies reconcile conflicting evidence in the extant literature on moral judgment and future-thinking, shed light on the role of temporal distance in moral circle expansion, and offer practical implications for better valuing and safeguarding the shared future of humanity.

Here's my summary:

This research investigates whether people normatively include future generations in their moral circles. The authors conducted four studies with a total of 823 participants, and found evidence for a progressive decline in the subjective moral standing of future generations with increasing temporal distance. This suggests that people generally tend to display present-oriented moral preferences.

However, the authors also found individual differences that mitigate this tendency and predict pro-future outcomes. These factors include individual variation in longtermism beliefs and the vividness of one's imagination. The authors also found that people are more likely to include future generations in their moral circles when they are primed to think about them or when they are asked to consider the long-term consequences of their actions.

The authors' findings reconcile conflicting evidence in the extant literature on moral judgment and future-thinking. They also shed light on the role of temporal distance in moral circle expansion and offer practical implications for better valuing and safeguarding the shared future of humanity.

Overall, the research paper provides evidence that people generally tend to prioritize the present over the future when making moral judgments. However, the authors also identify individual factors and contextual conditions that can promote moral future-thinking. These findings could be used to develop interventions that encourage people to consider the long-term consequences of their actions and to take steps to protect the well-being of future generations.

Wednesday, October 25, 2023

The moral psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A.
(2023, September 22). 

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in Artificial Intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients, or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to a moral interaction, or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Conclusion

We have not addressed every issue at the intersection of AI and moral psychology. Questions about how people perceive AI plagiarism, about how the presence of AI agents can reduce or enhance trust between groups of humans, and about how sexbots will alter intimate human relations are the subjects of active research programs. Many more as-yet-unasked questions will be provoked as new AI abilities develop. Given the pace of this change, any review paper can only be a snapshot. Nevertheless, the very recent and rapid emergence of AI-driven technology is colliding with moral intuitions forged by culture and evolution over the span of millennia. Grounding imaginative speculation about the possibilities of AI in a thorough understanding of the structure of human moral psychology will help prepare for a world shared with, and complicated by, machines.

Friday, October 20, 2023

Competition and moral behavior: A meta-analysis of forty-five crowd-sourced experimental designs

Huber, C., Dreber, A., et al. (2023).
Proceedings of the National Academy of Sciences, 120(23).

Abstract

Does competition affect moral behavior? This fundamental question has been debated among leading scholars for centuries, and more recently, it has been tested in experimental studies yielding a body of rather inconclusive empirical evidence. A potential source of ambivalent empirical results on the same hypothesis is design heterogeneity—variation in true effect sizes across various reasonable experimental research protocols. To provide further evidence on whether competition affects moral behavior and to examine whether the generalizability of a single experimental study is jeopardized by design heterogeneity, we invited independent research teams to contribute experimental designs to a crowd-sourced project. In a large-scale online data collection, 18,123 experimental participants were randomly allocated to 45 randomly selected experimental designs out of 95 submitted designs. We find a small adverse effect of competition on moral behavior in a meta-analysis of the pooled data. The crowd-sourced design of our study allows for a clean identification and estimation of the variation in effect sizes above and beyond what could be expected due to sampling variance. We find substantial design heterogeneity—estimated to be about 1.6 times as large as the average standard error of effect size estimates of the 45 research designs—indicating that the informativeness and generalizability of results based on a single experimental design are limited. Drawing strong conclusions about the underlying hypotheses in the presence of substantive design heterogeneity requires moving toward much larger data collections on various experimental designs testing the same hypothesis.

Significance

Using experiments involves leeway in choosing one out of many possible experimental designs. This choice constitutes a source of uncertainty in estimating the underlying effect size which is not incorporated into common research practices. This study presents the results of a crowd-sourced project in which 45 independent teams implemented research designs to address the same research question: Does competition affect moral behavior? We find a small adverse effect of competition on moral behavior in a meta-analysis involving 18,123 experimental participants. Importantly, however, the variation in effect size estimates across the 45 designs is substantially larger than the variation expected due to sampling errors. This “design heterogeneity” highlights that the generalizability and informativeness of individual experimental designs are limited.
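To make "design heterogeneity" concrete, the between-design variance tau² can be estimated with the standard DerSimonian-Laird random-effects formula from per-design effect sizes and standard errors. The numbers below are toy values for illustration, not the study's data.

```python
import numpy as np

def dersimonian_laird_tau2(effects, ses):
    """DerSimonian-Laird estimate of between-study variance tau^2."""
    effects = np.asarray(effects)
    w = 1.0 / np.asarray(ses) ** 2           # inverse-variance weights
    theta_fixed = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - theta_fixed) ** 2)  # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - df) / c)            # truncate at zero

# Toy per-design effect sizes and standard errors (hypothetical).
effects = np.array([-0.10, 0.02, -0.25, -0.05, 0.15, -0.30])
ses = np.array([0.05, 0.06, 0.05, 0.07, 0.06, 0.05])
tau2 = dersimonian_laird_tau2(effects, ses)
tau = tau2 ** 0.5
```

Comparing tau to the average standard error, as the study does (reporting a ratio of about 1.6), quantifies how much designs disagree beyond sampling noise.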

Here are some of the key takeaways from the research:
  • Competition has a small but statistically meaningful negative effect on moral behavior.
  • One plausible explanation is that competition makes people more self-interested and less concerned about the well-being of others.
  • Because effect sizes varied substantially across the 45 designs, results from any single experimental design are of limited generalizability; drawing strong conclusions requires large-scale data collection across many designs testing the same hypothesis.