Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, June 11, 2024

Morals Versus Ethics: Building An Organizational Culture Of Trust And Transparency

Pamela Furr
Forbes.com
Originally posted 6 May 24

Here are two excerpts:

Prioritize Transparency And Integrity

Our team is a diverse mix of ages, cultures, races and backgrounds, and we all bring unique experiences and perspectives to the table. If a colleague says or does something that doesn’t sit right with you, take a moment to pause, process and then approach them. Share how you felt in the moment—this can be as simple as saying, “My feelings were hurt when you did that” or “I didn’t think the language you used earlier was appropriate.” Give them the opportunity to explain or apologize before gossiping with coworkers or silently holding onto resentments. Trust each other to have open, honest conversations, and you can often defuse conflicts before they escalate.

(cut)

Build A Sense Of Community

Set the tone for open dialogue and mutual respect in your organization. By modeling these values in your interactions with others, you can inspire your team to uphold the same standards. Foster a culture in which you advocate for yourself and others and try to learn from others as well. Approach things you don’t understand with a spirit of curiosity and compassion, assuming positive intent until proven otherwise. Ask questions, and truly seek to understand someone else’s point of view.

I believe that an essential part of being a leader is ensuring that our employees feel safe, protected and heard when they come to work. We can work to hold external governing boards accountable to the standards they set, but we can also do everything in our power to create a culture of trust, transparency and accountability within our own organizations.


Here is my summary:

The article discusses the difference between morals and ethics. Morals are personal beliefs and values that guide our actions, while ethics are a set of rules established by a community or governing body.

The author describes a situation where a trainee made a false sexual harassment claim against her mentor. The certifying board refused to take any action because they saw it as an employment contract issue. The author argues that governing boards should take a stronger stance in upholding ethics within their professions.

The article concludes with the author's thoughts on creating an ethical and transparent workplace culture. The author emphasizes the importance of open communication, understanding policies and procedures, and building a sense of community. By following these principles, organizations can create a safe and supportive environment for their employees.

Wednesday, June 5, 2024

Evangelical literary tradition and moral foundations theory

Christopher Douglas
The Journal of American Culture
Originally published 26 Feb 24

Here is an excerpt:

What can MFT tell us about the topography of evangelical ethics as displayed in its bestselling fiction of the last 20 years? In many ways, there is nothing surprising in these findings. As Haidt himself suggests, the five primary foundations discernably track onto political orientations, with conservatives balancing all five criteria but liberals prioritizing care and fairness (as equality): “it's not just members of traditional societies who draw on all five foundations; even within Western societies, we consistently find an ideological effect in which religious and cultural conservatives value and rely upon all five foundations, whereas liberals value and rely upon the harm and fairness foundations primarily” (Haidt, 2007, 1001). Or, in updated form: “Liberals have a three-foundation morality, whereas conservatives use all six” (Haidt, 2012, 214). The Shack seems to aptly confirm this insight, prioritizing care, fairness-as-justice, and egalitarianism at the expense of loyalty, authority, and purity. These values reflect the author's liberal sensibilities that were suggested when Young tweeted criticism of Donald Trump after the Access Hollywood tapes were released (Douglas, 2020, 508n3). LaHaye's conservative credentials, meanwhile, are well known—early partner to Jerry Falwell in the formation of the Moral Majority, fundraiser for the Institute for Creation Research, and so on—and the Left Behind series suggests a mix of moral foundations that does not so much find a balance among all six foundations (as Haidt discovered seems to be true of “Very Conservatives”) as express a sort of Extremely Conservative sensibility. The Shack and the Left Behind series reflect the considerable range of white evangelical politics, but also reflect the fact that white evangelicals tilt heavily conservative, forming the most important demographic of the Republican base, voting for Donald Trump by 77 and 84% in 2016 and 2020, respectively (Igielnik et al., 2021).


Here is my summary:

The article explores the moral foundations of two evangelical best-selling novels: The Shack by William Paul Young and Left Behind by Tim LaHaye and Jerry Jenkins. It uses Moral Foundations Theory (MFT) to analyze how these seemingly very different novels prioritize different moral values.

Moral Foundations Theory (MFT) identifies five core moral foundations:
  • Care/Harm: Protecting others from harm and promoting their well-being.
  • Fairness/Cheating: Ensuring that people are treated justly and receive what they deserve.
  • Loyalty/Betrayal: Standing by your group and upholding your commitments.
  • Authority/Subversion: Respecting legitimate authority figures and hierarchies.
  • Sanctity/Degradation: Valuing purity, avoiding disgust, and respecting the sacred.
The Shack by William Paul Young grapples with the kidnapping, abuse, and murder of a child. It focuses on the themes of care/harm and fairness. The protagonist, Mack, wrestles with how God could allow such a tragedy to occur and how fairness can be achieved. The novel explores the idea of forgiveness and reconciliation.

Left Behind by Tim LaHaye and Jerry Jenkins is a series about the Rapture and the End Times. It emphasizes the moral foundations of loyalty/betrayal, authority/subversion, and sanctity/degradation. The series depicts a world where good and evil are clearly defined and a battle between God and the Antichrist is about to unfold. The in-group of Christians is loyal to God and resists the authority of the Antichrist. The series emphasizes the importance of following God's will and upholding Christian values.

The article argues that MFT helps explain the enduring appeal of these novels.  The Shack resonates with readers who seek comfort and answers in the face of tragedy. Left Behind appeals to readers who feel like they are part of an embattled community and who believe in a clear distinction between good and evil.

Monday, June 3, 2024

Morality in the Anthropocene: The Perversion of Compassion and Punishment in the Online World

Robertson, C., Shariff, A., & Van Bavel, J. J.
(2024, February 4).

Abstract

Although much of human morality evolved in an environment of small group living, almost six billion people use the internet in the modern era. We argue that the technological transformation has created an entirely new ecosystem that is often mismatched with our evolved adaptations for social living. We discuss how evolved responses to moral transgressions, such as compassion for victims of transgressions and punishment of transgressors, are disrupted by two main features of the online context. First, the scale of the internet exposes us to an unnaturally large quantity of extreme moral content, causing compassion fatigue and increasing public shaming. Second, the physical and psychological distance between moral actors online can lead to ineffective collective action and virtue signaling. We discuss practical implications of these mismatches and suggest directions for future research on morality in the internet era.

Significance Statement

Morality evolved when people lived in small, close-knit groups. Evolved responses to moral conflict, like compassion for the victim and punishment for the transgressor, had adaptive benefits. However, the internet has created a new ecosystem for human sociality changing morality in two important ways. First, the scale of the internet exposes people to unnaturally large quantities of extreme moral content. Second, people’s responses to moral transgressions are not beneficial in large, distal social groups. These mismatches can lead to compassion fatigue, ineffective collective action, public shaming, and virtue signaling.


Here is my summary:

The paper discusses how the internet has transformed human morality by creating a new ecosystem that often conflicts with our evolved social adaptations. The scale and nature of online interactions lead to compassion fatigue, public shaming, and ineffective collective action. Evolved responses to moral conflict, such as compassion for victims and punishment of transgressors, are disrupted online by vast exposure to extreme moral content and by the distance between moral actors. The authors trace the evolutionary underpinnings of moral cognition, explaining how innate social behaviors shaped morality, and describe how an overabundance of extreme moral content triggers maladaptive responses like heightened outrage and hostility. They also examine how online environments distort prosocial reactions, making genuine compassion, empathy, and effective third-party punishment harder to express.

Wednesday, May 29, 2024

Moral Hypocrisy: Social Groups and the Flexibility of Virtue

Robertson, C., Akles, M., & Van Bavel, J. J.
(2024, March 19).

Abstract

The tendency for people to consider themselves morally good while behaving selfishly is known as “moral hypocrisy.” Influential work by Valdesolo & DeSteno (2007) found evidence for intergroup moral hypocrisy, such that people are more forgiving of transgressions when they were committed by an in-group member than an out-group member. We conducted two experiments to examine moral hypocrisy and group membership in an online paradigm with Prolific Workers from the US: a direct replication of the original work with minimal groups (N = 610, nationally representative) and a conceptual replication with political groups (N = 606, 50% Democrat and 50% Republican). Although the results did not replicate the original findings, we observed evidence of in-group favoritism in minimal groups and out-group derogation in political groups. The current research finds mixed evidence of intergroup moral hypocrisy and has implications for understanding the contextual dependencies of intergroup bias and partisanship.

Statement of Relevance

Social identities and group memberships influence social judgment and decision-making. Prior research found that social identity influences moral decision making, such that people are more likely to forgive moral transgressions perpetrated by their in-group members than similar transgressions from out-group members (Valdesolo & DeSteno, 2007). The present research sought to replicate this pattern of intergroup moral hypocrisy using minimal groups (mirroring the original research) and political groups. Although we were unable to replicate the findings from the original paper, we found that people who are highly identified with their minimal group exhibited in-group favoritism, and partisans exhibited out-group derogation. This work contributes both to open science replication efforts, and to the literature on moral hypocrisy and intergroup relations.

Monday, May 27, 2024

When the specter of the past haunts current groups: Psychological antecedents of historical blame

Vallabha, S., Doriscar, J., & Brandt, M. J. (in press)
Journal of Personality and Social Psychology.
Most recent modification 2 Jan 24

Abstract

Groups have committed historical wrongs (e.g., genocide, slavery). We investigated why people blame current groups who were not involved in the original historical wrong for the actions of their predecessors who committed these wrongs and are no longer alive.  Current models of individual and group blame overlook the dimension of time and therefore have difficulty explaining this phenomenon using their existing criteria like causality, intentionality, or preventability. We hypothesized that factors that help psychologically bridge the past and present, like perceiving higher (i) connectedness between past and present perpetrator groups, (ii) continued privilege of perpetrator groups, (iii) continued harm of victim groups, and (iv) unfulfilled forward obligations of perpetrator groups would facilitate higher blame judgements against current groups for the past. In two repeated-measures surveys using real events (N1 = 518, N2 = 495) and two conjoint experiments using hypothetical events (N3 = 598, N4 = 605), we find correlational and causal evidence for our hypotheses. These factors link present groups to their past and cause more historical blame and support for compensation policies. This brings the dimension of time into theories of blame, uncovers overlooked criteria for blame judgements, and questions the assumptions of existing blame models. Additionally, it helps us understand the psychological processes undergirding intergroup relations and historical narratives mired in historical conflict. Our work provides psychological insight into the debates on intergenerational justice by suggesting methods people can use to ameliorate the psychological legacies of historical wrongs and atrocities.

(cut)

General Discussion

We tested four factors of blame towards current groups for their historical wrongs. We found correlational and causal evidence for our hypothesized factors across a broad range of hypothetical and real events. We found that when people perceive the current perpetrator group to have connectedness with their past, the current victim group to be suffering due to past harm, the current perpetrator group to be benefiting from past harm, and the current perpetrator group to have not fulfilled their obligations to remedy the wrong, historical blame judgements towards the current perpetrator groups are higher. On the whole, this was consistent across the location of the event (whether the participant was judging a historical American event or a historical non-American event), the group membership of the participant (whether the participant belonged to the victim or perpetrator group or neither/privileged or marginalized group), the ideology of the participant (whether the participant identified as a liberal or conservative), and the age of the participants. We also found that these factors were causally associated with behavioral intention, such as support for compensation to victim groups. Finally, we also found that historical blame attribution might mediate the effect of the key factors on support for compensation to victim groups. The four psychological factors that we identified as antecedents to perceptions of historical blame all help psychologically bridge the past and present. These factors provide psychological links between the past and present groups, in their characteristics (connectedness), outcomes (harm/benefit), and actions (unfulfilled obligations).

Sunday, May 26, 2024

A Large-Scale Investigation of Everyday Moral Dilemmas

Yudkin, D. A., Goodwin, G., et al. (2023, July 11).

Abstract

Questions of right and wrong are central to daily life, yet how people experience everyday moral dilemmas remains uncertain. We combined state-of-the-art tools in machine learning with survey-based methods in psychology to analyze a massive online English-language repository of everyday moral dilemmas. In 369,161 descriptions (“posts”) and 11M evaluations (“comments”) of moral dilemmas extracted from Reddit’s “Am I the Asshole?” forum (AITA), users described a wide variety of everyday dilemmas, ranging from broken promises to privacy violations. Dilemmas involving the under-investigated topic of relational obligations were the most frequently reported, while those pertaining to honesty were the most widely condemned. The types of dilemmas people experienced depended on the interpersonal closeness of the interactants, with some dilemmas (e.g., politeness) being more prominent in distant-other interactions, and others (e.g., relational transgressions) more prominent in close-other interactions. A longitudinal investigation showed that shifts in social interactions prompted by the “shock” event of the global pandemic resulted in predictable shifts in the types of moral dilemmas that people encountered. A preregistered study using a census-stratified representative sample of the US population (N = 510), as well as other robustness tests, suggest our findings generalize beyond the sample of Reddit users. Overall, by leveraging a unique large dataset and new techniques for exploring this dataset, our paper highlights the diversity of moral dilemmas experienced in daily life, and helps to build a moral psychology grounded in the vagaries of everyday experience.
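One reason the AITA forum lends itself to this kind of analysis is that commenters encode their judgments in standard verdict shorthands (YTA, NTA, ESH, NAH). The snippet below is a minimal illustrative sketch of how condemnation could be tallied from a post's comments; the verdict labels are the forum's own convention, but the functions and weighting here are my simplification, not the authors' pipeline:

```python
import re
from collections import Counter

# Standard AITA verdict shorthands (forum convention):
# YTA = "you're the asshole", NTA = "not the asshole",
# ESH = "everyone sucks here", NAH = "no assholes here".
VERDICTS = ("YTA", "NTA", "ESH", "NAH")
VERDICT_RE = re.compile(r"\b(" + "|".join(VERDICTS) + r")\b")

def tally_verdicts(comments):
    """Count the first verdict shorthand mentioned in each comment."""
    counts = Counter()
    for text in comments:
        match = VERDICT_RE.search(text.upper())
        if match:
            counts[match.group(1)] += 1
    return counts

def condemnation_rate(comments):
    """Fraction of verdict-bearing comments that condemn the poster."""
    counts = tally_verdicts(comments)
    total = sum(counts.values())
    if total == 0:
        return None  # no verdicts to aggregate
    return (counts["YTA"] + counts["ESH"]) / total

comments = [
    "YTA, you broke a promise.",
    "NTA, they overreacted.",
    "Honestly, yta here.",
    "Just wondering what happened next.",
]
print(tally_verdicts(comments))     # Counter({'YTA': 2, 'NTA': 1})
print(condemnation_rate(comments))  # 2 of 3 verdicts condemn
```

The actual study pairs this kind of crowd judgment with natural language processing over the post texts; the tally above only shows how the comment side of the dataset carries a quantifiable moral evaluation.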

Significance Statement

People often wonder if what they did or said was right or wrong. In this paper we leveraged a massive online repository of descriptions of everyday moral situations, along with new methods in natural language processing, to explore a number of questions about how people experience and evaluate these moral dilemmas. Our results highlight just how often daily moral experiences concern questions about our responsibilities to friends, neighbors, and family. They also reveal the extent to which such experiences can change according to people’s social context—including large-scale social changes like the COVID-19 pandemic.


My take: 

This study may be very important to clinical psychologists. It provides insights into the diversity and prevalence of everyday moral dilemmas that people encounter in their daily lives.

Clinical psychologists often work with clients to navigate complex moral and interpersonal situations, so understanding the common types of dilemmas people face is valuable.  The study shows that dilemmas involving relational obligations are the most frequently reported, with honesty and betrayal as major themes.  This suggests that clinical work should pay close attention to how clients navigate moral issues within their close relationships and the importance they place on honesty.

Friday, May 17, 2024

Moral universals: A machine-reading analysis of 256 societies

Alfano, M., Cheong, M., & Curry, O. S. (2024).
Heliyon, 10(6).
doi.org/10.1016/j.heliyon.2024.e25940 

Abstract

What is the cross-cultural prevalence of the seven moral values posited by the theory of “morality-as-cooperation”? Previous research, using laborious hand-coding of ethnographic accounts of ethics from 60 societies, found examples of most of the seven morals in most societies, and observed these morals with equal frequency across cultural regions. Here we replicate and extend this analysis by developing a new Morality-as-Cooperation Dictionary (MAC-D) and using Linguistic Inquiry and Word Count (LIWC) to machine-code ethnographic accounts of morality from an additional 196 societies (the entire Human Relations Area Files, or HRAF, corpus). Again, we find evidence of most of the seven morals in most societies, across all cultural regions. The new method allows us to detect minor variations in morals across region and subsistence strategy. And we successfully validate the new machine-coding against the previous hand-coding. In light of these findings, MAC-D emerges as a theoretically-motivated, comprehensive, and validated tool for machine-reading moral corpora. We conclude by discussing the limitations of the current study, as well as prospects for future research.
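The core of a LIWC-style dictionary method is simple: count how often words from each moral category appear in a text. The sketch below illustrates that idea only; the word lists are invented stand-ins, not the actual MAC-D entries, and real LIWC dictionaries use wildcard stems and far larger vocabularies:

```python
import re
from collections import Counter

# Illustrative stand-in word lists -- NOT the real MAC-D dictionary,
# which is much larger and uses wildcard stems (e.g., "help*").
MORAL_DICT = {
    "family":      {"kin", "mother", "father", "family"},
    "group":       {"loyal", "ally", "community", "unite"},
    "reciprocity": {"repay", "favor", "grateful", "exchange"},
    "heroism":     {"brave", "hero", "courage"},
    "deference":   {"obey", "respect", "elder", "authority"},
    "fairness":    {"fair", "share", "equal", "divide"},
    "property":    {"own", "property", "steal", "theft"},
}

def moral_signal(text):
    """Count word matches against each moral category, LIWC-style."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for word in words:
        for category, vocab in MORAL_DICT.items():
            if word in vocab:
                counts[category] += 1
    return counts

passage = "Elders are shown great respect, and it is fair to share meat."
print(moral_signal(passage))  # Counter({'fairness': 2, 'deference': 1})
```

Applied across the HRAF corpus, per-category counts like these give the "moral signal" that the authors then compare across regions and subsistence strategies, and validate against the earlier hand-coding.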

Significance statement

The empirical study of morality has hitherto been conducted primarily in WEIRD contexts and with living participants. This paper addresses both of these shortcomings by examining the global anthropological record. In addition, we develop a novel methodological tool, the morality-as-cooperation dictionary, which makes it possible to use natural language processing to extract a moral signal from text. We find compelling evidence that the seven moral elements posited by the morality-as-cooperation hypothesis are documented in the anthropological record in all regions of the world and among all subsistence strategies. Furthermore, differences in moral emphasis between different types of cultures tend to be non-significant and small when significant. This is evidence for moral universalism.


Here is my summary:

The study aimed to investigate potential moral universals across human societies by analyzing a large dataset of ethnographic texts describing the norms and practices of 256 societies from around the world. The researchers used machine learning and natural language processing techniques to identify recurring concepts and themes related to morality across the texts.

Some key findings:

1. Evidence for all seven of the cooperative morals posited by morality-as-cooperation was found in most societies, across all cultural regions:
  • Family values (helping kin)
  • Group loyalty
  • Reciprocity
  • Bravery (heroism)
  • Deference to authority
  • Fairness (dividing disputed resources)
  • Property rights (respecting prior possession)

2. However, there was also substantial variation in how these principles were interpreted and prioritized across cultures.

3. Violations of certain potential universals, like harm/care and fairness, were condemned more strongly when they affected one's own group than when they affected other groups.

4. Societies' mobility, population density, and reliance on agriculture or animal husbandry seemed to influence the relative importance placed on different moral principles.

The authors argue that while there do appear to be some common moral foundations widespread across societies, there is also substantial cultural variation in how these are expressed and prioritized. They suggest morality emerges from an interaction of innate psychological foundations and cultural evolutionary processes.

Saturday, May 11, 2024

Can Robots have Personal Identity?

Alonso, M.
International Journal of Social Robotics, 15, 211–220 (2023).
https://doi.org/10.1007/s12369-022-00958-y

Abstract

This article attempts to answer the question of whether robots can have personal identity. In recent years, and due to the numerous and rapid technological advances, the discussion around the ethical implications of Artificial Intelligence, Artificial Agents or simply Robots, has gained great importance. However, this reflection has almost always focused on problems such as the moral status of these robots, their rights, their capabilities or the qualities that these robots should have to support such status or rights. In this paper I want to address a question that has been much less analyzed but which I consider crucial to this discussion on robot ethics: the possibility, or not, that robots have or will one day have personal identity. The importance of this question has to do with the role we normally assign to personal identity as central to morality. After posing the problem and exposing this relationship between identity and morality, I will engage in a discussion with the recent literature on personal identity by showing in what sense one could speak of personal identity in beings such as robots. This is followed by a discussion of some key texts in robot ethics that have touched on this problem, finally addressing some implications and possible objections. I finally give the tentative answer that robots could potentially have personal identity, given other cases and what we empirically know about robots and their foreseeable future.


The article explores the idea of personal identity in robots. It acknowledges that this is a complex question tied to how we define "personhood" itself.

There are arguments against robots having personal identity, often focusing on the biological and experiential differences between humans and machines.

On the other hand, the article highlights that robots can develop and change over time, forming a narrative of self much like humans do. They can also build relationships with people, suggesting a form of "relational personal identity".

The article concludes that even if a robot's identity is different from a human's, it could still be considered a true identity, deserving of consideration. This opens the door to discussions about the ethical treatment of advanced AI.

Thursday, April 18, 2024

An artificial womb could build a bridge to health for premature babies

Rob Stein
npr.org
Originally posted 12 April 24

Here is an excerpt:

Scientific progress prompts ethical concerns

But the possibility of an artificial womb is also raising many questions. When might it be safe to try an artificial womb for a human? Which preterm babies would be the right candidates? What should they be called? Fetuses? Babies?

"It matters in terms of how we assign moral status to individuals," says Mercurio, the Yale bioethicist. "How much their interests — how much their welfare — should count. And what one can and cannot do for them or to them."

But Mercurio is optimistic those issues can be resolved, and the potential promise of the technology clearly warrants pursuing it.

The Food and Drug Administration held a workshop in September 2023 to discuss the latest scientific efforts to create an artificial womb, the ethical issues the technology raises, and what questions would have to be answered before allowing an artificial womb to be tested for humans.

"I am absolutely pro the technology because I think it has great potential to save babies," says Vardit Ravitsky, president and CEO of The Hastings Center, a bioethics think tank.

But there are particular issues raised by the current political and legal environment.

"My concern is that pregnant people will be forced to allow fetuses to be taken out of their bodies and put into an artificial womb rather than being allowed to terminate their pregnancies — basically, a new way of taking away abortion rights," Ravitsky says.

She also wonders: What if it becomes possible to use artificial wombs to gestate fetuses for an entire pregnancy, making natural pregnancy unnecessary?


Here are some general ethical concerns:

The use of artificial wombs raises several ethical and moral concerns. One key issue is the potential for artificial wombs to be used to extend the limits of fetal viability, which could complicate debates around abortion access and the moral status of the fetus. There are also concerns that artificial wombs could enable "designer babies" through genetic engineering and lead to the commodification of human reproduction. Additionally, some argue that developing a baby outside of a woman's uterus is inherently "unnatural" and could undermine the maternal-fetal bond.

However, proponents contend that artificial wombs could save the lives of premature infants and provide options for women with high-risk pregnancies.

Ultimately, the ethics of artificial womb technology will require careful consideration of principles like autonomy, beneficence, and justice as this technology continues to advance.

Tuesday, April 16, 2024

As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits

Eric Lipton
The New York Times
Originally posted 21 Nov 23

Here is an excerpt:

Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.

The intense jamming of radio communications and GPS in Ukraine has only accelerated the shift, as autonomous drones can often keep operating even when communications are cut off.

“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.

Pentagon officials have made it clear that they are preparing to deploy autonomous weapons in a big way.

Deputy Defense Secretary Kathleen Hicks announced this summer that the U.S. military would “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, saying that the push to compete with China’s own investment in advanced weapons necessitated that the United States “leverage platforms that are small, smart, cheap and many.”

The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.

What is changing is the introduction of artificial intelligence that could give weapons systems the capability to make decisions themselves after taking in and processing information.


Here is a summary:

This article discusses the debate at the UN regarding lethal autonomous weapons (LAWs): essentially drones and other systems with AI that can select and attack targets without human intervention. There are concerns that this technology could lead to unintended casualties, make wars more likely, and remove the human element from the decision to take a life.
  • Many countries are worried about the development and deployment of LAWs.
  • Austria and other countries are proposing a total ban on LAWs or at least strict regulations requiring human control and limitations on how they can be used.
  • The US, Russia, and China are opposed to a ban and argue that LAWs could potentially reduce civilian casualties in wars.
  • The US prefers non-binding guidelines over new international laws.
  • The UN is currently deadlocked on the issue with no clear path forward for creating regulations.

Saturday, March 30, 2024

How digital media drive affective polarization through partisan sorting

Törnberg, P. (2022).
Proceedings of the National Academy of Sciences,
119(42).

Abstract

Politics has in recent decades entered an era of intense polarization. Explanations have implicated digital media, with the so-called echo chamber remaining a dominant causal hypothesis despite growing challenge by empirical evidence. This paper suggests that this mounting evidence provides not only reason to reject the echo chamber hypothesis but also the foundation for an alternative causal mechanism. To propose such a mechanism, the paper draws on the literatures on affective polarization, digital media, and opinion dynamics. From the affective polarization literature, we follow the move from seeing polarization as diverging issue positions to rooted in sorting: an alignment of differences which is effectively dividing the electorate into two increasingly homogeneous megaparties. To explain the rise in sorting, the paper draws on opinion dynamics and digital media research to present a model which essentially turns the echo chamber on its head: it is not isolation from opposing views that drives polarization but precisely the fact that digital media bring us to interact outside our local bubble. When individuals interact locally, the outcome is a stable plural patchwork of cross-cutting conflicts. By encouraging nonlocal interaction, digital media drive an alignment of conflicts along partisan lines, thus effacing the counterbalancing effects of local heterogeneity. The result is polarization, even if individual interaction leads to convergence. The model thus suggests that digital media polarize through partisan sorting, creating a maelstrom in which more and more identities, beliefs, and cultural preferences become drawn into an all-encompassing societal division.
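The proposed mechanism lends itself to a toy simulation. The sketch below is my illustration of the sorting idea, not the paper's actual model: agents hold positions on two initially independent issues, and influence is homophilous, so an agent adopts a partner's view on issue B only when they already agree on issue A. The only thing varied is whether partners are drawn locally (a ring of neighbors) or globally (anyone online):

```python
import random

def issue_correlation(a, b):
    """Pearson correlation between positions on the two issues."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    if va == 0 or vb == 0:
        return 1.0  # degenerate case: one camp only
    return cov / (va * vb) ** 0.5

def simulate(n_agents=200, steps=20000, local=True, seed=1):
    """Toy sorting model: adopt a partner's view on issue B
    only when you already agree with them on issue A."""
    rng = random.Random(seed)
    # Two binary issue positions per agent, initially independent.
    a = [rng.choice([-1, 1]) for _ in range(n_agents)]
    b = [rng.choice([-1, 1]) for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        if local:
            j = (i + rng.choice([-1, 1])) % n_agents  # ring neighbor
        else:
            j = rng.randrange(n_agents)               # anyone online
        if a[i] == a[j]:
            b[i] = b[j]  # homophilous influence aligns issue B with A
    return issue_correlation(a, b)

print(f"local interaction:  r = {simulate(local=True):+.2f}")
print(f"global interaction: r = {simulate(local=False):+.2f}")
```

Under the sketch's assumptions, global partner choice lets homophilous copying reach across the whole population, so the two initially unrelated issues tend to become aligned (sorted); local interaction instead preserves a patchwork of cross-cutting combinations, echoing the paper's claim that it is nonlocal interaction, not isolation, that drives sorting.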

Significance

Recent years have seen a rapid rise of affective polarization, characterized by intense negative feelings between partisan groups. This represents a severe societal risk, threatening democratic institutions and constituting a metacrisis, reducing our capacity to respond to pressing societal challenges such as climate change, pandemics, or rising inequality. This paper provides a causal mechanism to explain this rise in polarization, by identifying how digital media may drive a sorting of differences, which has been linked to a breakdown of social cohesion and rising affective polarization. By outlining a potential causal link between digital media and affective polarization, the paper suggests ways of designing digital media so as to reduce their negative consequences.
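The sorting mechanism the paper describes can be loosely illustrated with a toy agent-based simulation. To be clear, this is my own illustrative sketch, not the paper's actual model; the agent count, the homophilous update rule, and the `p_nonlocal` parameter are all assumptions chosen for brevity:

```python
import random

def sorting_after(n=200, steps=30000, p_nonlocal=0.0, seed=1):
    """Toy sketch of partisan sorting. Each of n agents on a ring holds a
    +/-1 position on two unrelated issues. At each step an agent meets
    either a ring neighbor (local interaction) or a uniformly random agent
    (nonlocal interaction, standing in for digital media). If the pair
    agrees on one randomly chosen issue, the agent adopts the partner's
    position on the other issue, so homophily gradually couples the issues.
    Returns the absolute correlation between the two issues: near 0 means
    a cross-cutting patchwork, near 1 means two aligned camps."""
    rng = random.Random(seed)
    ops = [[rng.choice([-1, 1]), rng.choice([-1, 1])] for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < p_nonlocal:
            j = rng.randrange(n)                 # anyone, anywhere
        else:
            j = (i + rng.choice([-1, 1])) % n    # immediate neighbor only
        a = rng.randrange(2)                     # issue the pair compares on
        if ops[i][a] == ops[j][a]:
            ops[i][1 - a] = ops[j][1 - a]        # adopt the partner's other issue
    return abs(sum(x * y for x, y in ops)) / n

local_sorting = sorting_after(p_nonlocal=0.0)    # neighbors only
mixed_sorting = sorting_after(p_nonlocal=0.9)    # mostly nonlocal contact
```

In this kind of setup, global mixing lets the homophilous update rule align the two issues into a single overarching divide, while purely local interaction can sustain regionally varied, cross-cutting combinations for longer. Exact values depend on parameters and seed, so treat it as an intuition pump rather than a replication.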

Monday, March 25, 2024

Jean Maria Arrigo, Who Exposed Psychologists’ Ties to Torture, Dies at 79

Trip Gabriel
The New York Times
Originally published 19 March 24

Jean Maria Arrigo, a psychologist who exposed efforts by the American Psychological Association to obscure the role of psychologists in coercive interrogations of terror suspects in the aftermath of the Sept. 11, 2001, attacks, died on Feb. 24 at her home in Alpine, Calif. She was 79.

The cause was complications of pancreatic cancer, her husband, John Crigler, said.

A headline about her as a whistle-blower in The Guardian in 2015 put it succinctly: “‘A National Hero’: Psychologist Who Warned of Torture Collusion Gets Her Due.”

A decade earlier, Dr. Arrigo had been named to a task force by the American Psychological Association, the largest professional group of psychologists, to examine the role of trained psychologists in national security interrogations.

The 10-member panel was formed in response to news reports in 2004 about abuse at the American-run Abu Ghraib prison in Iraq and at Guantánamo Bay in Cuba, which included details about psychologists aiding in interrogations that, according to the International Committee of the Red Cross, were “tantamount to torture.”

Dr. Arrigo later asserted that the A.P.A. task force was a sham — a public relations effort “to put out the fires of controversy right away,” as she told fellow psychologists in a wave-making speech in 2007.


Not all heroes wear capes.

Jean Maria Arrigo, a psychologist known for exposing the American Psychological Association's involvement in obscuring psychologists' roles in coercive interrogations post-9/11, passed away at 79 due to complications from pancreatic cancer. She was a whistleblower who revealed the APA's efforts to downplay psychologists' participation in interrogations deemed as torture. Arrigo criticized the APA's task force, stating it was a sham with ties to the Pentagon and conflicts of interest. Despite facing backlash and attacks from colleagues, she persisted in her crusade against APA complicity with brutal interrogations. Arrigo's work highlighted the ethical dilemmas faced by psychologists in national security contexts and emphasized the need for clear boundaries on involvement in such practices.

Thursday, March 21, 2024

AI-synthesized faces are indistinguishable from real faces and more trustworthy

Nightingale, S. J., & Farid, H. (2022).
Proceedings of the National Academy of Sciences, 119(8).

Abstract

Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces.

Here is part of the Discussion section

Synthetically generated faces are not just highly photorealistic, they are nearly indistinguishable from real faces and are judged more trustworthy. This hyperphotorealism is consistent with recent findings. These two studies did not contain the same diversity of race and gender as ours, nor did they match the real and synthetic faces as we did to minimize the chance of inadvertent cues. While it is less surprising that White male faces are highly realistic—because these faces dominate the neural network training—we find that the realism of synthetic faces extends across race and gender. Perhaps most interestingly, we find that synthetically generated faces are more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces which themselves are deemed more trustworthy. Regardless of the underlying reason, synthetically generated faces have emerged on the other side of the uncanny valley. This should be considered a success for the fields of computer graphics and vision. At the same time, easy access (https://thispersondoesnotexist.com) to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns, with serious implications for individuals, societies, and democracies.

We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits. If so, then we discourage the development of technology simply because it is possible. If not, then we encourage the parallel development of reasonable safeguards to help mitigate the inevitable harms from the resulting synthetic media. Safeguards could include, for example, incorporating robust watermarks into the image and video synthesis networks that would provide a downstream mechanism for reliable identification. Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.

Here are some important points:

This research raises concerns about the potential for misuse of AI-generated faces in areas like deepfakes and disinformation campaigns.

It also opens up interesting questions about how we perceive trust and authenticity in our increasingly digital world.

Thursday, March 14, 2024

A way forward for responsibility in the age of AI

Gogoshin, D.L.
Inquiry (2024)

Abstract

Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what are the goods attached to them? The debate concerning ‘machine morality’ is often hinged on whether artificial agents are or could ever be morally responsible, and it is generally taken for granted (following Matthias 2004) that if they cannot, they pose a threat to the moral responsibility system and associated goods. In this paper, I challenge this assumption by asking what the goods of this system, if any, are, and what happens to them in the face of artificially intelligent agents. I will argue that they neither introduce new problems for the moral responsibility system nor do they threaten what we really (ought to) care about. I conclude the paper with a proposal for how to secure this objective.


Here is my summary:

While AI may not possess true moral agency, it's crucial to consider how the development and use of AI can be made more responsible. The author challenges the assumption that AI's lack of moral responsibility inherently creates problems for our current system of ethics. Instead, they focus on the "goods" this system provides, such as deserving blame or praise, and how these can be upheld even with AI's presence. To achieve this, the author proposes several steps, including:
  1. Shifting the focus from AI's moral agency to the agency of those who design, build, and use it. This means holding these individuals accountable for the societal impacts of AI.
  2. Developing clear ethical guidelines for AI development and use. These guidelines should be comprehensive, addressing issues like fairness, transparency, and accountability.
  3. Creating robust oversight mechanisms. This could involve independent bodies that monitor AI development and use, and have the power to intervene when necessary.
  4. Promoting public understanding of AI. This will help people make informed decisions about how AI is used in their lives and hold developers and users accountable.

Tuesday, March 12, 2024

Discerning Saints: Moralization of Intrinsic Motivation and Selective Prosociality at Work

Kwon, M., Cunningham, J. L., & Jachimowicz, J. M. (2023).
Academy of Management Journal, 66(6), 1625–1650.

Abstract

Intrinsic motivation has received widespread attention as a predictor of positive work outcomes, including employees’ prosocial behavior. We offer a more nuanced view by proposing that intrinsic motivation does not uniformly increase prosocial behavior toward all others. Specifically, we argue that employees with higher intrinsic motivation are more likely to value intrinsic motivation and associate it with having higher morality (i.e., they moralize it). When employees moralize intrinsic motivation, they perceive others with higher intrinsic motivation as being more moral and thus engage in more prosocial behavior toward those others, and judge others who are less intrinsically motivated as less moral and thereby engage in less prosocial behaviors toward them. We provide empirical support for our theoretical model across a large-scale, team-level field study in a Latin American financial institution (n = 784, k = 185) and a set of three online studies, including a preregistered experiment (n = 245, 243, and 1,245), where we develop a measure of the moralization of intrinsic motivation and provide both causal and mediating evidence. This research complicates our understanding of intrinsic motivation by revealing how its moralization may at times dim the positive light of intrinsic motivation itself.

The article is paywalled.  Here are some thoughts:

This study focuses on how intrinsically motivated employees (those who enjoy their work) might act differently towards other employees depending on their own level of intrinsic motivation. The key points are:

Main finding: Employees with high intrinsic motivation tend to view others who are also highly intrinsically motivated as more moral. This leads them to offer more help and support to those similar colleagues, while judging colleagues with lower intrinsic motivation as less moral and offering them less help.

Theoretical framework: The concept of "moralization of intrinsic motivation" (MOIM) explains this behavior. Essentially, intrinsic motivation becomes linked to moral judgment, influencing who is seen as "good" and deserving of help.

Implications:
  • For theory: This research adds a new dimension to understanding intrinsic motivation, highlighting the potential for judgment and selective behavior.
  • For practice: Managers and leaders should be aware of the unintended consequences of promoting intrinsic motivation, as it might create bias and division among employees.
  • For employees: Those lacking intrinsic motivation might face disadvantages due to judgment from colleagues. They could try job crafting or seeking alternative support strategies.

Overall, the study reveals a nuanced perspective on intrinsic motivation, acknowledging its positive aspects while recognizing its potential to create inequality and ethical concerns.

Monday, March 11, 2024

Why People Fail to Notice Horrors Around Them

Tali Sharot and Cass R. Sunstein
The New York Times
Originally posted 25 Feb 24

The miraculous history of our species is peppered with dark stories of oppression, tyranny, bloody wars, savagery, murder and genocide. When looking back, we are often baffled and ask: Why weren't the horrors halted earlier? How could people have lived with them?

The full picture is immensely complicated. But a significant part of it points to the rules that govern the operations of the human brain.

Extreme political movements, as well as deadly conflicts, often escalate slowly. When threats start small and increase gradually, they end up eliciting a weaker emotional reaction, less resistance and more acceptance than they would otherwise. The slow increase allows larger and larger horrors to play out in broad daylight, taken for granted, seen as ordinary.

One of us is a neuroscientist; the other is a law professor. From our different fields, we have come to believe that it is not possible to understand the current period, and the shifts in what counts as normal, without appreciating why and how people do not notice so much of what we live with.

The underlying reason is a pivotal biological feature of our brain: habituation, or our tendency to respond less and less to things that are constant or that change slowly. You enter a cafe filled with the smell of coffee and at first the smell is overwhelming, but no more than 20 minutes go by and you cannot smell it any longer. This is because your olfactory neurons stop firing in response to a now-familiar odor.

Similarly, you stop hearing the persistent buzz of an air-conditioner because your brain filters out background noise. Your brain cares about what recently changed, not about what remained the same.

Habituation is one of our most basic biological characteristics, something that we two-legged, bigheaded creatures share with other animals on earth, including apes, elephants, dogs, birds, frogs, fish and rats.

Human beings also habituate to complex social circumstances such as war, corruption, discrimination, oppression, widespread misinformation and extremism. Habituation does not only result in a reduced tendency to notice and react to grossly immoral deeds around us; it also increases the likelihood that we will engage in them ourselves.


Here is my summary:

The authors attribute our failure to notice the horrors around us to habituation: the brain's tendency to respond less and less to stimuli that are constant or that change slowly. Because extreme political movements and atrocities typically escalate gradually, each step elicits a weaker emotional response than it would if it arrived all at once, and what was once shocking comes to feel ordinary. Habituation not only dulls our ability to notice and react to grossly immoral conduct; it also makes us more likely to participate in it ourselves. Recognizing this tendency is a first step toward noticing, and resisting, entrenched wrongs.

Sunday, March 10, 2024

MAGA’s Violent Threats Are Warping Life in America

David French
New York Times - Opinion
Originally published 18 Feb 24

Amid the constant drumbeat of sensational news stories — the scandals, the legal rulings, the wild political gambits — it’s sometimes easy to overlook the deeper trends that are shaping American life. For example, are you aware how much the constant threat of violence, principally from MAGA sources, is now warping American politics? If you wonder why so few people in red America seem to stand up directly against the MAGA movement, are you aware of the price they might pay if they did?

Late last month, I listened to a fascinating NPR interview with the journalists Michael Isikoff and Daniel Klaidman regarding their new book, “Find Me the Votes,” about Donald Trump’s efforts to overturn the 2020 election. They report that Georgia prosecutor Fani Willis had trouble finding lawyers willing to help prosecute her case against Trump. Even a former Georgia governor turned her down, saying, “Hypothetically speaking, do you want to have a bodyguard follow you around for the rest of your life?”

He wasn’t exaggerating. Willis received an assassination threat so specific that one evening she had to leave her office incognito while a body double wearing a bulletproof vest courageously pretended to be her and offered a target for any possible incoming fire.


Here is my summary of the article:

David French discusses the pervasive threat of violence, particularly from MAGA sources, and its impact on American politics. The author highlights instances where individuals faced intimidation and threats for opposing the MAGA movement, such as a Georgia prosecutor receiving an assassination threat and judges being swatted. The article also mentions the significant increase in threats against members of Congress since Trump took office, with Capitol Police opening over 8,000 threat assessments in a year. The piece sheds light on the chilling effect these threats have on individuals like Mitt Romney, who spends $5,000 per day on security, and lawmakers who fear for their families' safety. The overall narrative underscores how these violent threats are shaping American life and politics.

Thursday, March 7, 2024

Canada Postpones Plan to Allow Euthanasia for Mentally Ill

Craig McCulloh
Voice of America News
Originally posted 8 Feb 24

The Canadian government is delaying access to medically assisted death for people with mental illness.

Those suffering from mental illness were supposed to be able to access Medical Assistance in Dying — also known as MAID — starting March 17. The recent announcement by the government of Canadian Prime Minister Justin Trudeau was the second delay after original legislation authorizing the practice passed in 2021.

The delay came in response to a recommendation by a majority of the members of a committee made up of senators and members of Parliament.

One of the most high-profile proponents of MAID is British Columbia-based lawyer Chris Considine. In the mid-1990s, he represented Sue Rodriguez, who was dying from amyotrophic lateral sclerosis, commonly known as ALS.

Their bid for approval of a medically assisted death was rejected at the time by the Supreme Court of Canada. But a law passed in 2016 legalized euthanasia for individuals with terminal conditions. From then until 2022, more than 45,000 people chose to die.


Summary:

Canada originally planned to expand its Medical Assistance in Dying (MAiD) program to include individuals with mental illnesses in March 2024.
  • This plan has been postponed until 2027 due to concerns about the healthcare system's readiness and potential ethical issues.
  • The original legislation passed in 2021, but concerns about safeguards and mental health support led to delays.
  • This issue is complex and ethically charged, with advocates arguing for individual autonomy and opponents raising concerns about coercion and vulnerability.
I would be concerned about the following issues:
  • Vulnerability: Mental illness can impair judgment, raising concerns about informed consent and potential coercion.
  • Safeguards: Concerns exist about insufficient safeguards to prevent abuse or exploitation.
  • Mental health access: Limited access to adequate mental health treatment could contribute to undue pressure towards MAiD.
  • Social inequalities: Concerns exist about disproportionate access to MAiD based on socioeconomic background.

Wednesday, March 6, 2024

We're good people: Moral conviction as social identity

Ekstrom, P. D. (2022, April 27).

Abstract

Moral convictions—attitudes that people construe as matters of right and wrong—have unique effects on behavior, from activism to intolerance. Less is known, though, about the psychological underpinnings of moral convictions themselves. I propose that moral convictions are social identities. Consistent with the idea that moral convictions are identities, I find in two studies that attitude-level moral conviction predicts (1) attitudes’ self-reported identity centrality and (2) reaction time to attitude-related stimuli in a me/not me task. Consistent with the idea that moral convictions are social identities, I find evidence that participants used their moral convictions to perceive, categorize, and remember information about other individuals’ positions on political issues, and that they did so more strongly when their convictions were more identity-central. In short, the identities that participants’ moral convictions defined were also meaningful social categories, providing a basis to distinguish “us” from “them.” However, I also find that non-moral attitudes can serve as meaningful social categories. Although moral convictions were more identity-central than non-moral attitudes, moral and non-moral attitudes may both define social identities that are more or less salient in certain situations. Regardless, social identity may help explain intolerance for moral disagreement, and identity-based interventions may help reduce that intolerance.

Here is my summary:

Main Hypothesis:
  • Moral convictions (beliefs about right and wrong) are seen as fundamental and universally true, distinct from other attitudes.
  • The research proposes that they shape how people view themselves and others, acting as social identities.
Key Points:
  • Moral convictions define group belonging: People use them to categorize themselves and others as "good" or "bad," similar to how we might use group affiliations like race or religion.
  • They influence our relationships: We tend to be more accepting and trusting of those who share our moral convictions.
  • They can lead to conflict: When morals clash, it can create animosity and division between groups with different convictions.
Evidence:
  • The research cites studies showing how people judge others based on their moral stances, similar to how they judge based on group membership.
  • It also shows how moral convictions predict behavior like activism and intolerance towards opposing views.
Implications:
  • Understanding how moral convictions function as social identities can help explain conflict, prejudice, and social movements.
  • It may also offer insights into promoting understanding and cooperation between groups with differing moral beliefs.
Overall:

This research suggests that moral convictions are more than just strong opinions; they act as powerful social identities shaping how we see ourselves and interact with others. Understanding this dynamic can offer valuable insights into social behavior and potential avenues for promoting tolerance and cooperation.

Tuesday, March 5, 2024

You could lie to a health chatbot – but it might change how you perceive yourself

Dominic Wilkinson
The Conversation
Originally posted 8 Feb 24

Here is an excerpt:

The ethics of lying

There are different ways that we can think about the ethics of lying.

Lying can be bad because it causes harm to other people. Lies can be deeply hurtful to another person. They can cause someone to act on false information, or to be falsely reassured.

Sometimes, lies can harm because they undermine someone else’s trust in people more generally. But those reasons will often not apply to the chatbot.

Lies can wrong another person, even if they do not cause harm. If we willingly deceive another person, we potentially fail to respect their rational agency, or use them as a means to an end. But it is not clear that we can deceive or wrong a chatbot, since they don’t have a mind or ability to reason.

Lying can be bad for us because it undermines our credibility. Communication with other people is important. But when we knowingly make false utterances, we diminish the value, in other people’s eyes, of our testimony.

For the person who repeatedly expresses falsehoods, everything that they say then falls into question. This is part of the reason we care about lying and our social image. But unless our interactions with the chatbot are recorded and communicated (for example, to humans), our chatbot lies aren’t going to have that effect.

Lying is also bad for us because it can lead to others being untruthful to us in turn. (Why should people be honest with us if we won’t be honest with them?)

But again, that is unlikely to be a consequence of lying to a chatbot. On the contrary, this type of effect could be partly an incentive to lie to a chatbot, since people may be conscious of the reported tendency of ChatGPT and similar agents to confabulate.


Here is my summary:

The article discusses the potential consequences of lying to a health chatbot, even though it might seem tempting. It highlights a situation where someone frustrated with a wait for surgery considers exaggerating their symptoms to a chatbot screening them.

While lying might offer short-term benefits like quicker attention, the author argues it could have unintended consequences:

Impact on healthcare:
  • Inaccurate information can hinder proper diagnosis and treatment.
  • It contributes to an already strained healthcare system.
Self-perception:
  • Repeatedly lying, even to a machine, can erode honesty and integrity.
  • It reinforces unhealthy avoidance of seeking professional help.
The article encourages readers to be truthful with chatbots for better healthcare outcomes and self-awareness. It acknowledges the frustration with healthcare systems but emphasizes the importance of transparency for both individual and collective well-being.