Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, June 30, 2023

The psychology of zero-sum beliefs

Davidai, S., & Tepper, S. J. (2023).
Nature Reviews Psychology.

Abstract

People often hold zero-sum beliefs (subjective beliefs that, independent of the actual distribution of resources, one party’s gains are inevitably accrued at other parties’ expense) about interpersonal, intergroup and international relations. In this Review, we synthesize social, cognitive, evolutionary and organizational psychology research on zero-sum beliefs. In doing so, we examine when, why and how such beliefs emerge and what their consequences are for individuals, groups and society.  Although zero-sum beliefs have been mostly conceptualized as an individual difference and a generalized mindset, their emergence and expression are sensitive to cognitive, motivational and contextual forces. Specifically, we identify three broad psychological channels that elicit zero-sum beliefs: intrapersonal and situational forces that elicit threat, generate real or imagined resource scarcity, and inhibit deliberation. This systematic study of zero-sum beliefs advances our understanding of how these beliefs arise, how they influence people’s behaviour and, we hope, how they can be mitigated.

From the Summary and Future Directions section

We have suggested that zero-sum beliefs are influenced by threat, a sense of resource scarcity and lack of deliberation. Although each of these three channels can separately lead to zero-sum beliefs, simultaneously activating more than one channel might be especially potent. For instance, focusing on losses (versus gains) is both threatening and heightens a sense of resource scarcity. Consequently, focusing on losses might be especially likely to foster zero-sum beliefs. Similarly, insufficient deliberation on the long-term and dynamic effects of international trade might foster a view of domestic currency as scarce, prompting the belief that trade is zero-sum. Thus, any factor that simultaneously affects the threat that people experience, their perceptions of resource scarcity, and their level of deliberation is more likely to result in zero-sum beliefs, and attenuating zero-sum beliefs requires an exploration of all the different factors that lead to these experiences in the first place. For instance, increasing deliberation reduces zero-sum beliefs about negotiations by increasing people’s accountability, perspective taking or consideration of mutually beneficial issues. Future research could manipulate deliberation in other contexts to examine its causal effect on zero-sum beliefs. Indeed, because people express more moderate beliefs after deliberating policy details, prompting participants to deliberate about social issues (for example, asking them to explain the process by which one group’s outcomes influence another group’s outcomes) might reduce zero-sum beliefs. More generally, research could examine long-term and scalable solutions for reducing zero-sum beliefs, focusing on interventions that simultaneously reduce threat, mitigate views of resource scarcity and increase deliberation.  
For instance, as formal training in economics is associated with lower zero-sum beliefs, researchers could examine whether teaching people basic economic principles reduces zero-sum beliefs across various domains. Similarly, because higher socioeconomic status is negatively associated with zero-sum beliefs, creating a sense of abundance might counter the belief that life is zero-sum.

Thursday, June 29, 2023

Fairytales have always reflected the morals of the age. It’s not a sin to rewrite them

Martha Gill
The Guardian
Originally posted 4 June 23

Here are two excerpts:

General outrage greeted “woke” updates to Roald Dahl books this year, and still periodically erupts over Disney remakes, most recently a forthcoming film with a Latina actress as Snow White, and a new Peter Pan & Wendy with “lost girls”. The argument is that too much fashionable refurbishment tends to ruin a magical kingdom, and that cult classics could do with the sort of Grade I listing applied to heritage buildings. If you want to tell new stories, fine – but why not start from scratch?

But this point of view misses something, which is that updating classics is itself an ancient part of literary culture; in fact, it is a tradition, part of our heritage too. While the larger portion of the literary canon is carefully preserved, a slice of it has always been more flexible, to be retold and reshaped as times change.

Fairytales fit within this latter custom: they have been updated, periodically, for many hundreds of years. Cult figures such as Dracula, Frankenstein and Sherlock Holmes fit there too, as do superheroes: each generation, you might say, gets the heroes it deserves. And so does Bond. Modernity is both a villain and a hero within the Bond franchise: the dramatic tension between James – a young cosmopolitan “dinosaur” – and the passing of time has always been part of the fun.

This tradition has a richness to it: it is a historical record of sorts. Look at the progress of the fairy story through the ages and you get a twisty tale of dubious progress, a moral journey through the woods. You could say fairytales have always been politically correct – that is, tweaked to reflect whatever morals a given cohort of parents most wanted to teach their children.

(cut)

The idea that we are pasting over history – censoring important artefacts – is wrongheaded too. It is not as if old films or books have been burned, wiped from the internet or removed from libraries. With today’s propensity for writing things down, common since the 1500s, there is no reason to fear losing the “original” stories.

As for the suggestion that minority groups should make their own stories instead – this is a sly form of exclusion. Ancient universities and gentlemen’s clubs once made similar arguments; why couldn’t exiled individuals simply set up their own versions? It is not so easy. Old stories weave themselves deep into the tapestry of a nation; newer ones will necessarily be confined to the margins.


My take: Updating classic stories can be beneficial and even necessary to promote inclusion, diversity, equity, and fairness. By not updating these stories, we risk perpetuating harmful stereotypes and narratives that reinforce the dominant culture. When we update classic stories, we can create new possibilities for representation and understanding that can help to build a more just and equitable world.  Dominant cultures need to cede power to promote more unity in a multicultural nation.

Wednesday, June 28, 2023

Forgetting is a Feature, not a Bug: Intentionally Forgetting Some Things Helps Us Remember Others by Freeing up Working Memory Resources

Popov, V., Marevic, I., Rummel, J., & Reder, L. M. (2019).
Psychological Science, 30(9), 1303–1317.
https://doi.org/10.1177/0956797619859531

Abstract

We used an item-method directed forgetting paradigm to test whether instructions to forget or to remember one item in a list affects memory for the subsequent item in that list. In two experiments, we found that free and cued recall were higher when a word-pair was preceded during study by a to-be-forgotten (TBF) word pair. This effect was cumulative – performance was higher when more of the preceding items during study were TBF. It also interacted with lag between study items – the effect decreased as the lag between the current and a prior item increased.  Experiment 2 used a dual-task paradigm in which we suppressed either verbal rehearsal or attentional refreshing during encoding. We found that neither task removed the effect, thus the advantage from previous TBF items could not be due to rehearsal or attentional borrowing. We propose that storing items in long-term memory depletes a limited pool of resources that recovers over time, and that TBF items deplete fewer resources, leaving more available for storing subsequent items. A computational model implementing the theory provided excellent fits to the data.

General Discussion

We demonstrated a previously unknown DF (Directed Forgetting) after-effect of remember and forget instructions in an item method DF paradigm on memory for the items that follow a pair that was to be remembered versus forgotten: cued and free recall for word pairs was higher when people were instructed to forget the preceding word pair. This effect was cumulative, such that performance was even better when more of the preceding pairs had to be forgotten. The size of the DF after-effect depended on how many pairs ago the DF instruction appeared during study. Specifically, the immediately preceding word-pair provided a stronger DF aftereffect than when the DF instruction appeared several word-pairs ago. Finally, neither increased rehearsal nor attentional borrowing of TBR items could explain why memory for the subsequent item was worse in those cases – the DF after-effects remained stable, even when rehearsal was suppressed or attention divided in a dual-task paradigm.

The DF after-effects are replicable and remarkably consistent across the two experiments – the odds ratio associated with items preceded by TBR items rather than TBF items at lag one was 0.66 in the prior study and 0.67 in the new experiment. Similarly, the odds ratio for the effect of cues at lag two was 0.77 and 0.76 in the two studies. Thus, this represents a robust and replicable phenomenon. Additionally, the multinomial storage–retrieval model confirmed that DF after-effects are clearly a storage phenomenon.
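The proposed mechanism (a limited resource pool that encoding depletes and time partially replenishes) can be illustrated with a toy simulation. This is a hedged sketch with arbitrary illustrative parameter values, not the authors' fitted multinomial model:

```python
def simulate_list(instructions, deplete_tbr=0.5, deplete_tbf=0.2, recovery=0.15):
    """Toy illustration of the resource-depletion account: encoding an
    item spends resources from a limited pool that partially recovers
    before the next item; to-be-forgotten ("F") items spend less than
    to-be-remembered ("R") items, leaving more for what follows."""
    pool = 1.0
    recall_probs = []
    for instr in instructions:
        # Later recall of a TBR item depends on resources available now.
        recall_probs.append(pool if instr == "R" else 0.0)
        pool = max(0.0, pool - (deplete_tbr if instr == "R" else deplete_tbf))
        pool = min(1.0, pool + recovery)  # partial recovery between items
    return recall_probs

# A TBR pair preceded by a TBF pair is encoded with more available
# resources than one preceded by another TBR pair.
after_tbf = simulate_list(["F", "R"])[1]
after_tbr = simulate_list(["R", "R"])[1]
print(after_tbf, after_tbr)
```

The qualitative pattern (higher encoding resources, hence better recall, after a TBF item, and a cumulative benefit with more preceding TBF items) falls out of the depletion-and-recovery dynamics regardless of the exact parameter values chosen.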


Summary: Forgetting is not always a bad thing; it can be helpful. When learning a new skill, forgetting old information that is no longer relevant frees up working memory resources for storing the new information. Learning materials might therefore usefully include explicit instructions to forget less important information, helping learners focus on what matters most.

Tuesday, June 27, 2023

All human social groups are human, but some are more human than others

Morehouse, K. N., Maddox, K. B., & Banaji, M. R. (2023).
PNAS, 120(22), e2300995120.

Abstract

All human groups are equally human, but are they automatically represented as such? Harnessing data from 61,377 participants across 13 experiments (six primary and seven supplemental), a sharp dissociation between implicit and explicit measures emerged. Despite explicitly affirming the equal humanity of all racial/ethnic groups, White participants consistently associated Human (relative to Animal) more with White than Black, Hispanic, and Asian groups on Implicit Association Tests (IATs; experiments 1–4). This effect emerged across diverse representations of Animal that varied in valence (pets, farm animals, wild animals, and vermin; experiments 1–2). Non-White participants showed no such Human=Own Group bias (e.g., Black participants on a White–Black/Human–Animal IAT). However, when the test included two outgroups (e.g., Asian participants on a White–Black/Human–Animal IAT), non-White participants displayed Human=White associations. The overall effect was largely invariant across demographic variations in age, religion, and education but did vary by political ideology and gender, with self-identified conservatives and men displaying stronger Human=White associations (experiment 3). Using a variance decomposition method, experiment 4 showed that the Human=White effect cannot be attributed to valence alone; the semantic meaning of Human and Animal accounted for a unique proportion of variance. Similarly, the effect persisted even when Human was contrasted with positive attributes (e.g., God, Gods, and Dessert; experiment 5a). Experiments 5a-b clarified the primacy of Human=White rather than Animal=Black associations. Together, these experiments document a factually erroneous but robust Human=Own Group implicit stereotype among US White participants (and globally), with suggestive evidence of its presence in other socially dominant groups.

Significance

All humans belong to the species Homo sapiens. Yet, throughout history, humans have breathed life into the Orwellian adage that “All [humans] are equal, but some [humans] are more equal than others.” Here, participants staunchly rejected this adage, with the overwhelming majority of over 61,000 participants reporting that all humans are equally human. However, across 13 experiments, US White participants (and White participants abroad) showed robust evidence of an implicit Human=Own Group association. Conversely, Black, Latinx, and Asian participants in the United States did not demonstrate this bias. These results highlight the tendency among socially dominant groups to reserve the quality Human for their own kind, producing, even in the 21st century, the age-old error of pseudospeciation.
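IAT effects of this kind are conventionally summarized with a D score: the difference in mean response latency between the incompatible and compatible pairings, divided by the pooled standard deviation. Here is a minimal sketch with made-up latencies; the published scoring algorithm adds trial-level filtering and error penalties that are omitted here:

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT D score: mean latency difference between the
    incompatible and compatible blocks, divided by the standard
    deviation of all latencies pooled across both blocks."""
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Hypothetical latencies (ms). Faster responses when Human is paired
# with the participant's own group yield a positive D score.
compatible = [620, 650, 600, 640, 610]
incompatible = [750, 780, 720, 760, 740]
print(round(iat_d_score(compatible, incompatible), 2))
```

A positive D here would correspond to the Human=Own Group association the paper reports; a score near zero would indicate no implicit preference.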

My summary:

These results suggest that US White participants implicitly view White people as more human than Black or Hispanic people.

The authors also found that these implicit associations were not simply a reflection of participants' explicit beliefs about race. In fact, when participants were asked to explicitly rate how human they believed different racial/ethnic groups were, they rated all groups as equally human. This suggests that implicit associations are not always accessible to conscious awareness, and that they can have a significant impact on our behavior even when we are unaware of them.

The authors conclude that their findings suggest that implicit bias against Black and Hispanic people is widespread in the United States. They argue that this bias can have a number of negative consequences, including discrimination in employment, housing, and education. They also suggest that interventions to reduce implicit bias are needed to create a more just and equitable society.

Said slightly differently, the dominant group's "myside bias" is implicit, automatic, unconscious, and difficult to change. White dominant culture needs to take extra steps to level the playing field of our society.

Monday, June 26, 2023

Characterizing empathy and compassion using computational linguistic analysis

Yaden, D. B., Giorgi, S., et al. (2023). 
Emotion. Advance online publication.

Abstract

Many scholars have proposed that feeling what we believe others are feeling—often known as “empathy”—is essential for other-regarding sentiments and plays an important role in our moral lives. Caring for and about others (without necessarily sharing their feelings)—often known as “compassion”—is also frequently discussed as a relevant force for prosocial motivation and action. Here, we explore the relationship between empathy and compassion using the methods of computational linguistics. Analyses of 2,356,916 Facebook posts suggest that individuals (N = 2,781) high in empathy use different language than those high in compassion, after accounting for shared variance between these constructs. Empathic people, controlling for compassion, often use self-focused language and write about negative feelings, social isolation, and feeling overwhelmed. Compassionate people, controlling for empathy, often use other-focused language and write about positive feelings and social connections. In addition, high empathy without compassion is related to negative health outcomes, while high compassion without empathy is related to positive health outcomes, positive lifestyle choices, and charitable giving. Such findings favor an approach to moral motivation that is grounded in compassion rather than empathy.

From the General Discussion

Linguistic topics related to compassion (without empathy) and empathy (without compassion) show clear relationships with four of the five personality factors. Topics related to compassion without empathy are marked by higher conscientiousness, extraversion, agreeableness, and emotional stability. Empathy without compassion topics are more associated with introversion and are also moderately associated with neuroticism and lower conscientiousness. The association of low emotional stability and conscientiousness is also in line with prior research that found “distress,” a construct with important parallels to empathy, being associated with fleeing from a helping situation (Batson et al., 1987) and with lower helping (Jordan et al., 2016; Schroeder et al., 1988; Twenge et al., 2007; and others).

In sum, it appears that compassion without empathy and empathy without compassion are at least somewhat distinct and have unique predictive validity in personality, health, and prosocial behavior. While the mechanisms through which these different relationships occur remain unknown, some previous work bears on this issue. As mentioned, other work has found that merely focusing on others resulted in more intentions to help others (Bloom, 2017; Davis, 1983; Jordan et al., 2016), which helps to explain the relationship between the more other-focused compassion and donation behavior that we observed.
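The general recipe behind this kind of computational linguistic analysis can be sketched in miniature: score each person's text for a word category, then correlate the rates with a trait measure. Everything below (the word list, posts, and empathy scores) is invented for illustration; the study itself used far richer open-vocabulary topic models over millions of posts:

```python
from math import sqrt

SELF_WORDS = {"i", "me", "my", "myself"}

def focus_rate(text, vocab):
    """Fraction of a text's tokens that fall in a word category."""
    tokens = text.lower().split()
    return sum(t in vocab for t in tokens) / len(tokens)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs) *
                      sum((y - my) ** 2 for y in ys))

# Invented posts and invented empathy scores for three "users".
posts = [
    "i feel so overwhelmed and i cannot cope with my sadness",
    "they helped their neighbor and you could see their joy",
    "my worries keep me up i feel alone",
]
empathy_scores = [4.5, 1.5, 4.0]

self_rates = [focus_rate(p, SELF_WORDS) for p in posts]
r = pearson(self_rates, empathy_scores)
print(round(r, 2))  # positive: more self-focused language, higher empathy
```

In this toy data the self-focus rate tracks the empathy score, mirroring the paper's finding that empathic people (controlling for compassion) use more self-focused language.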


In sum, high empathy without compassion is related to negative health outcomes, while high compassion without empathy is related to positive health outcomes. These findings suggest that compassion may matter more for moral motivation than empathy, and that too much empathy can overwhelm clinicians and undermine high-quality care. Care about others' feelings without absorbing them.

Sunday, June 25, 2023

Harvard Business School Professor Francesca Gino Accused of Committing Data Fraud

Rahem D. Hamid
Crimson Staff Writer
Originally published 24 June 23

Here is an excerpt:

But in a post on June 17, Data Colada wrote that they found evidence of additional data fabrication in that study in a separate experiment that Gino was responsible for.

Harvard has also been internally investigating “a series of papers” for more than a year, according to the Chronicle of Higher Education. Data Colada wrote last week that the University’s internal report may be around 1,200 pages.

The professors added that Harvard has requested that three other papers co-authored by Gino — which Data Colada flagged — also be retracted and that the 2012 paper’s retraction be amended to include Gino’s fabrications.

Last week, Bazerman told the Chronicle of Higher Education that he was informed by Harvard that the experiments he co-authored contained additional fraudulent data.

Bazerman called the evidence presented to him by the University “compelling,” but he denied to the Chronicle that he was at all involved with the data manipulation.

According to Data Colada, Gino was “the only author involved in the data collection and analysis” of the experiment in question.

“To the best of our knowledge, none of Gino’s co-authors carried out or assisted with the data collection for the studies in question,” the professors wrote.

In their second post on Tuesday, the investigators wrote that a 2015 study co-authored by Gino also contains manipulations to prove the paper’s hypothesis.

Observations in the paper, the three wrote, “were altered to produce the desired effect.”

“And if these observations were altered, then it is reasonable to suspect that other observations were altered as well,” they added.


Science is a part of a healthy society:
  • Scientific research relies on the integrity of the researchers. When researchers fabricate or falsify data, they undermine the trust that is necessary for scientific progress.
  • Data fraud can have serious consequences. It can lead to the publication of false or misleading findings, which can have a negative impact on public policy, business decisions, and other areas.

Saturday, June 24, 2023

The Darwinian Argument for Worrying About AI

Dan Hendrycks
Time.com
Originally posted 31 May 23

Here is an excerpt:

In the biological realm, evolution is a slow process. For humans, it takes nine months to create the next generation and around 20 years of schooling and parenting to produce fully functional adults. But scientists have observed meaningful evolutionary changes in species with rapid reproduction rates, like fruit flies, in fewer than 10 generations. Unconstrained by biology, AIs could adapt—and therefore evolve—even faster than fruit flies do.

There are three reasons this should worry us. The first is that selection effects make AIs difficult to control. Whereas AI researchers once spoke of “designing” AIs, they now speak of “steering” them. And even our ability to steer is slipping out of our grasp as we let AIs teach themselves and increasingly act in ways that even their creators do not fully understand. In advanced artificial neural networks, we understand the inputs that go into the system, but the output emerges from a “black box” with a decision-making process largely indecipherable to humans.

Second, evolution tends to produce selfish behavior. Amoral competition among AIs may select for undesirable traits. AIs that successfully gain influence and provide economic value will predominate, replacing AIs that act in a more narrow and constrained manner, even if this comes at the cost of lowering guardrails and safety measures. As an example, most businesses follow laws, but in situations where stealing trade secrets or deceiving regulators is highly lucrative and difficult to detect, a business that engages in such selfish behavior will most likely outperform its more principled competitors.

Selfishness doesn’t require malice or even sentience. When an AI automates a task and leaves a human jobless, this is selfish behavior without any intent. If competitive pressures continue to drive AI development, we shouldn’t be surprised if they act selfishly too.

The third reason is that evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation. Skeptics of AI risks often ask, “Couldn’t we just turn the AI off?” There are a variety of practical challenges here. The AI could be under the control of a different nation or a bad actor. Or AIs could be integrated into vital infrastructure, like power grids or the internet. When embedded into these critical systems, the cost of disabling them may prove too high for us to accept since we would become dependent on them. AIs could become embedded in our world in ways that we can’t easily reverse. But natural selection poses a more fundamental barrier: we will select against AIs that are easy to turn off, and we will come to depend on AIs that we are less likely to turn off.

These strong economic and strategic pressures to adopt the systems that are most effective mean that humans are incentivized to cede more and more power to AI systems that cannot be reliably controlled, putting us on a pathway toward being supplanted as the earth’s dominant species. There are no easy, surefire solutions to our predicament.

Friday, June 23, 2023

In the US, patient data privacy is an illusion

Harlan M Krumholz
Opinion
BMJ 2023;381:p1225

Here is an excerpt:

The regulation allows anyone involved in a patient’s care to access health information about them. It is based on the paternalistic assumption that for any healthcare provider or related associate to be able to provide care for a patient, unfettered access to all of that individual’s health records is required, regardless of the patient’s preference. This provision removes control from the patient’s hands for choices that should be theirs alone to make. For example, the pop-up covid testing service you may have used can claim to be an entity involved in your care and gain access to your data. This access can be bought through many for-profit companies. The urgent care centre you visited for your bruised ankle can access all your data. The team conducting your prenatal testing is considered involved in your care and can access your records. Health insurance companies can obtain all the records. And these are just a few examples.

Moreover, health systems legally transmit sensitive information with partners, affiliates, and vendors through Business Associate Agreements. But patients may not want their sensitive information disseminated—they may not want all their identified data transmitted to a third party through contracts that enable those companies to sell their personal information if the data are de-identified. And importantly, with all the advances in data science, effectively de-identifying detailed health information is almost impossible.
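One way to see why de-identification is so fragile is to count how many records remain unique on just a few quasi-identifiers; one widely cited estimate holds that ZIP code, birth date, and sex alone uniquely identify most Americans. The records below are invented for illustration:

```python
from collections import Counter

# Invented "de-identified" records: no names, yet most rows are still
# unique on the combination of ZIP code, birth year, and sex.
records = [
    ("06510", 1971, "M"), ("06510", 1985, "F"),
    ("06511", 1971, "M"), ("06510", 1985, "F"),
    ("06512", 1990, "F"), ("06511", 1962, "M"),
]

counts = Counter(records)
unique = [r for r in records if counts[r] == 1]
print(f"{len(unique)} of {len(records)} records are unique "
      f"on ZIP + birth year + sex")
```

Any record that is unique on its quasi-identifiers can, in principle, be linked back to a named individual using an outside dataset (a voter roll, a data-broker file) that shares those attributes, which is why detailed health data resists true de-identification.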

HIPAA confers ample latitude to these third parties. As a result, companies make massive profits from the sale of data. Some companies claim to be able to provide comprehensive health information on more than 300 million Americans—most of the American public—for a price. These companies' business models are legal, yet most patients remain in the dark about what may be happening to their data.

However, massive accumulations of medical data do have the potential to produce insights into medical problems and accelerate progress towards better outcomes. And many uses of a patient’s data, despite moving throughout the healthcare ecosystem without their knowledge, may nevertheless help advance new diagnostics and therapeutics. The critical questions surround the assumptions people should have about their health data and the disclosures that should be made before a patient speaks with a health professional. Should each person be notified before interacting with a healthcare provider about what may happen with the information they share or the data their tests reveal? Are there new technologies that could help patients regain control over their data?

Although no one would relish a return to paper records, that cumbersome system at least made it difficult for patients’ data to be made into a commodity. The digital transformation of healthcare data has enabled wondrous breakthroughs—but at the cost of our privacy. And as computational power and more clever means of moving and organising data emerge, the likelihood of permission-based privacy will recede even further.

Thursday, June 22, 2023

The psychology of asymmetric zero-sum beliefs

Roberts, R., & Davidai, S. (2022).
Journal of Personality and Social Psychology, 
123(3), 559–575.

Abstract

Zero-sum beliefs reflect the perception that one party’s gains are necessarily offset by another party’s losses. Although zero-sum relationships are, from a strictly theoretical perspective, symmetrical, we find evidence for asymmetrical zero-sum beliefs: The belief that others gain at one’s own expense, but not vice versa. Across various contexts (international relations, interpersonal negotiations, political partisanship, organizational hierarchies) and research designs (within- and between-participant), we find that people are more prone to believe that others’ success comes at their own expense than they are to believe that their own success comes at others’ expense. Moreover, we find that people exhibit asymmetric zero-sum beliefs only when thinking about how their own party relates to other parties but not when thinking about how other parties relate to each other. Finally, we find that this effect is moderated by how threatened people feel by others’ success and that reassuring people about their party’s strengths eliminates asymmetric zero-sum beliefs. We discuss the theoretical contributions of our findings to research on interpersonal and intergroup zero-sum beliefs and their implications for understanding when and why people view life as zero-sum.

From the Discussion Section

Beyond documenting a novel asymmetry in beliefs about one’s own and others’ gains and losses, our findings make several important theoretical contributions to the literature on zero-sum beliefs. First, research on zero-sum beliefs has mostly focused on what specific groups believe about others’ gains within threatening intergroup contexts (e.g., White Americans’ attitudes about Black Americans’ gains, men’s attitudes about women’s gains) or on what negotiators believe about their counterparts’ gains within the context of a negotiation (which is typically rife with threat; e.g., Sinaceur et al., 2011; White et al., 2004). In doing so, research has examined zero-sum beliefs from only one perspective: how threatened parties view outgroup gains. Yet, as shown, those who feel most threatened are also most likely to exhibit zero-sum beliefs. By only examining the beliefs of those who feel threatened by others within the specific contexts in which they feel most threatened, the literature may have painted an incomplete picture of zero-sum beliefs that overlooks the possibility of asymmetrical beliefs. Our research expands this work by examining zero-sum beliefs in both threatening and nonthreatening contexts and by examining beliefs about one’s own and others’ gains, revealing that feeling threatened by others’ success plays a central role in when and why zero-sum beliefs arise.


I frequently use the research on zero-sum thinking in couples counseling to help partners develop a more cooperative mindset. This means they must be willing to work together to find solutions that benefit both of them. When couples learn to cooperate, they are more likely to resolve conflict in a healthy way.

Wednesday, June 21, 2023

3 Strategies for Making Better, More Informed Decisions

Francesca Gino
Harvard Business Review
Originally published 25 May 23

Here is an excerpt:

Think counterfactually about previous decisions you’ve made.

Counterfactual thinking invites you to consider different courses of action you could have taken to gain a better understanding of the factors that influenced your choice. For example, if you missed a big deadline on a work project, you might reflect on how working harder, asking for help, or renegotiating the deadline could have affected the outcome. This reflection can help you recognize which factors played a significant role in your decision-making process — for example, valuing getting the project done on your own versus getting it done on time — and identify changes you might want to make when it comes to future decisions.

The 1998 movie Sliding Doors offers a great example of how counterfactual thinking can help us understand the forces that shape our decisions. The film explores two alternate storylines for the main character, Helen (played by Gwyneth Paltrow), based on whether she catches an upcoming subway train or misses it. While watching both storylines unfold, we gain insight into different factors that influence Helen’s life choices.

Similarly, engaging in counterfactual thinking can help you think through choices you’ve made by helping you expand your focus to consider multiple frames of reference beyond the present outcome. This type of reflection encourages you to take note of different perspectives and reach a more balanced view of your choices. By thinking counterfactually, you can ensure you are looking at existing data in a more unbiased way.

Challenge your assumptions.

You can also fight self-serving biases by actively seeking out information that challenges your beliefs and assumptions. This can be uncomfortable, as it could threaten your identity and worldview, but it’s a key step in developing a more nuanced and informed perspective.

One way to do this is to purposely expose yourself to different perspectives in order to broaden your understanding of an issue. Take Satya Nadella, the CEO of Microsoft. When he assumed the role in 2014, he recognized that the company’s focus on Windows and Office was limiting its growth potential. Not only did the company need a new strategy, he recognized that the culture needed to evolve as well.

In order to expand the company’s horizons, Nadella sought out talent from different backgrounds and industries, who brought with them a diverse range of perspectives. He also encouraged Microsoft employees to experiment and take risks, even if it meant failing along the way. By purposefully exposing himself and his team to different perspectives and new ideas, Nadella was able to transform Microsoft into a more innovative and customer-focused company, with a renewed focus on cloud computing and artificial intelligence.

Tuesday, June 20, 2023

Ethical Accident Algorithms for Autonomous Vehicles and the Trolley Problem: Three Philosophical Disputes

Sven Nyholm
In Lillehammer, H. (ed.), The Trolley Problem.
Cambridge: Cambridge University Press, 2023

Abstract

The Trolley Problem is one of the most intensively discussed and controversial puzzles in contemporary moral philosophy. Over the last half-century, it has also become something of a cultural phenomenon, having been the subject of scientific experiments, online polls, television programs, computer games, and several popular books. This volume offers newly written chapters on a range of topics including the formulation of the Trolley Problem and its standard variations; the evaluation of different forms of moral theory; the neuroscience and social psychology of moral behavior; and the application of thought experiments to moral dilemmas in real life. The chapters are written by leading experts on moral theory, applied philosophy, neuroscience, and social psychology, and include several authors who have set the terms of the ongoing debates. The volume will be valuable for students and scholars working on any aspect of the Trolley Problem and its intellectual significance.

Here is the conclusion:

Accordingly, it seems to me that just as the first methodological approach mentioned a few paragraphs above is problematic, so is the third methodological approach. In other words, we do best to take the second approach. We should neither rely too heavily (or indeed exclusively) on the comparison between the ethics of self-driving cars and the trolley problem, nor wholly ignore and pay no attention to the comparison between the ethics of self-driving cars and the trolley problem. Rather, we do best to make this one – but not the only – thing we do when we think about the ethics of self-driving cars. With what is still a relatively new issue for philosophical ethics to work with, and indeed also regarding older ethical issues that have been around much longer, using a mixed and pluralistic method that approaches the moral issues we are considering from many different angles is surely the best way to go. In this instance, that includes reflecting on – and reflecting critically on – how the ethics of crashes involving self-driving cars is both similar to and different from the philosophy of the trolley problem.

At this point, somebody might say, “what if I am somebody who really dislikes the self-driving cars/trolley problem comparison, and I would really prefer reflecting on the ethics of self-driving cars without spending any time on thinking about the similarities and differences between the ethics of self-driving cars and the trolley problem?” In other words, should everyone working on the ethics of self-driving cars spend at least some of their time reflecting on the comparison with the trolley problem? Luckily for those who are reluctant to spend any of their time reflecting on the self-driving cars/trolley problem comparison, there are others who are willing and able to devote at least some of their energies to this comparison.

In general, I think we should view the community that works on the ethics of this issue as being one in which there can be a division of labor, whereby different members of this field can partly focus on different things, and thereby together cover all of the different aspects that are relevant and important to investigate regarding the ethics of self-driving cars.  As it happens, there has been a remarkable variety in the methods and approaches people have used to address the ethics of self-driving cars (see Nyholm 2018 a-b).  So, while it is my own view that anybody who wants to form a complete overview of the ethics of self-driving cars should, among other things, devote some of their time to studying the comparison with the trolley problem, it is ultimately no big problem if not everyone wishes to do so. There are others who have been studying, and who will most likely continue to reflect on, this comparison.

Monday, June 19, 2023

On the origin of laws by natural selection

DeScioli, P.
Evolution and Human Behavior
Volume 44, Issue 3, May 2023, Pages 195-209

Abstract

Humans are lawmakers like we are toolmakers. Why do humans make so many laws? Here we examine the structure of laws to look for clues about how humans use them in evolutionary competition. We will see that laws are messages with a distinct combination of ideas. Laws are similar to threats but critical differences show that they have a different function. Instead, the structure of laws matches moral rules, revealing that laws derive from moral judgment. Moral judgment evolved as a strategy for choosing sides in conflicts by impartial rules of action—rather than by hierarchy or faction. For this purpose, humans can create endless laws to govern nearly any action. However, as prolific lawmakers, humans produce a confusion of contradictory laws, giving rise to a perpetual battle to control the laws. To illustrate, we visit some of the major conflicts over laws of violence, property, sex, faction, and power.

(cut)

Moral rules are not for cooperation

We have briefly summarized the major divisions and operations of moral judgment. Why then did humans evolve such elaborate powers of the mind devoted to moral rules? What is all this rule making for?

One common opinion is that moral rules are for cooperation. That is, we make and enforce a moral code in order to cooperate more effectively with other people. Indeed, traditional theories beginning with Darwin assume that morality is the same as cooperation. These theories successfully explain many forms of cooperation, such as why humans and other animals care for offspring, trade favors, respect property, communicate honestly, and work together in groups. For instance, theories of reciprocity explain why humans keep records of other people’s deeds in the form of reputation, why we seek partners who are nice, kind, and generous, why we praise these virtues, and why we aspire to attain them.

However, if we look closely, these theories explain cooperation, not moral judgment. Cooperation pertains to our decisions to benefit or harm someone, whereas moral judgment pertains to our judgments of someone’s action as right or wrong. The difference is crucial because these mental faculties operate independently and they evolved separately. For instance, people can use moral judgment to cooperate but also to cheat, such as a thief who hides the theft because they judge it to be wrong, or a corrupt leader who invents a moral rule that forbids criticism of the leader. Likewise, people use moral judgment to benefit others but also to harm them, such as falsely accusing an enemy of murder to imprison them.

Regarding their evolutionary history, moral judgment is a recent adaptation while cooperation is ancient and widespread, some forms as old as the origins of life and multicellular organisms. Recalling our previous examples, social animals like gorillas, baboons, lions, and hyenas cooperate in numerous ways. They care for offspring, share food, respect property, work together in teams, form reputations, and judge others’ characters as nice or nasty. But these species do not communicate rules of action, nor do they learn, invent, and debate the rules. Like language, moral judgment most likely evolved recently in the human lineage, long after complex forms of cooperation.

From the Conclusion

Having anchored ourselves to concrete laws, we next asked, What are laws for? This is the central question for any mental power because it persists only by aiding an animal in evolutionary competition. In this search, we should not be deterred by the magnificent creativity and variety of laws. Some people suppose that natural selection could impart no more than a few fixed laws in the human mind, but there are no grounds for this supposition. Natural selection designed all life on Earth and its creativity exceeds our own. The mental adaptations of animals outperform our best computer programs on routine tasks such as locomotion and vision. Why suppose that human laws must be far simpler than, for instance, the flight controllers in the brain of a hummingbird? And there are obvious counterexamples. Language is a complex adaptation but this does not mean that humans speak just a few sentences. Tool use comes from mental adaptations including an intuitive theory of physics, and again these abilities do not limit but enable the enormous variety of tools.

Sunday, June 18, 2023

Gender-Affirming Care for Trans Youth Is Neither New nor Experimental: A Timeline and Compilation of Studies

Julia Serano
Medium.com
Originally posted 16 May 23

Trans and gender-diverse people are a pancultural and transhistorical phenomenon. It is widely understood that we, like LGBTQ+ people more generally, arise due to natural variation rather than as a result of pathology, modernity, or the latest conspiracy theory.

Gender-affirming healthcare has a long history. The first trans-related surgeries were carried out in the 1910s–1930s (Meyerowitz, 2002, pp. 16–21). While some doctors were supportive early on, most were wary. Throughout the mid-twentieth century, these skeptical doctors subjected trans people to all sorts of alternate treatments — from perpetual psychoanalysis, to aversion and electroshock therapies, to administering assigned-sex-consistent hormones (e.g., testosterone for trans female/feminine people), and so on — but none of them worked. The only treatment that reliably allowed trans people to live happy and healthy lives was allowing them to transition. While doctors were initially worried that many would eventually come to regret that decision, study after study has shown that gender-affirming care has a far lower regret rate (typically around 1 or 2 percent) than virtually any other medical procedure. Given all this, plus the fact that there is no test for being trans (medical, psychological, or otherwise), around the turn of the century, doctors began moving away from strict gatekeeping and toward an informed consent model for trans adults to attain gender-affirming care.

Trans children have always existed — indeed most trans adults can tell you about their trans childhoods. During the twentieth century, while some trans kids did socially transition (Gill-Peterson, 2018), most had their gender identities disaffirmed, either by parents who disbelieved them or by doctors who subjected them to “gender reparative” or “conversion” therapies. The rationale behind the latter was a belief at that time that gender identity was flexible and subject to change during early childhood, but we now know that this is not true (see e.g., Diamond & Sigmundson, 1997; Reiner & Gearhart, 2004). Over the years, it became clear that these conversion efforts were not only ineffective, but they caused real harm — this is why most health professional organizations oppose them today.

Given the harm caused by gender-disaffirming approaches, around the turn of the century, doctors and gender clinics began moving toward what has come to be known as the gender affirmative model — here’s how I briefly described this approach in my 2016 essay Detransition, Desistance, and Disinformation: A Guide for Understanding Transgender Children Debates:

Rather than being shamed by their families and coerced into gender conformity, these children are given the space to explore their genders. If they consistently, persistently, and insistently identify as a gender other than the one they were assigned at birth, then their identity is respected, and they are given the opportunity to live as a member of that gender. If they remain happy in their identified gender, then they may later be placed on puberty blockers to stave off unwanted bodily changes until they are old enough (often at age sixteen) to make an informed decision about whether or not to hormonally transition. If they change their minds at any point along the way, then they are free to make the appropriate life changes and/or seek out other identities.

Saturday, June 17, 2023

Debt Collectors Want To Use AI Chatbots To Hustle People For Money

Corin Faife
vice.com
Originally posted 18 MAY 23

Here are two excerpts:

The prospect of automated AI systems making phone calls to distressed people adds another dystopian element to an industry that has long targeted poor and marginalized people. Debt collection and enforcement is far more likely to occur in Black communities than white ones, and research has shown that predatory debt and interest rates exacerbate poverty by keeping people trapped in a never-ending cycle. 

In recent years, borrowers in the US have been piling on debt. In the fourth quarter of 2022, household debt rose to a record $16.9 trillion according to the New York Federal Reserve, accompanied by an increase in delinquency rates on larger debt obligations like mortgages and auto loans. Outstanding credit card balances are at record levels, too. The pandemic generated a huge boom in online spending, and besides traditional credit cards, younger spenders were also hooked by fintech startups pushing new finance products, like the extremely popular “buy now, pay later” model of Klarna, Sezzle, Quadpay and the like.

So debt is mounting, and with interest rates up, more and more people are missing payments. That means more outstanding debts being passed on to collection, giving the industry a chance to sprinkle some AI onto the age-old process of prodding, coaxing, and pressuring people to pay up.

For an insight into how this works, we need look no further than the sales copy of companies that make debt collection software. Here, products are described in a mix of generic corp-speak and dystopian portent: SmartAction, another conversational AI product like Skit, has a debt collection offering that claims to help with “alleviating the negative feelings customers might experience with a human during an uncomfortable process”—because they’ll surely be more comfortable trying to negotiate payments with a robot instead. 

(cut)

“Striking the right balance between assertiveness and empathy is a significant challenge in debt collection,” the company writes in the blog post, which claims GPT-4 has the ability to be “firm and compassionate” with customers.

When algorithmic, dynamically optimized systems are applied to sensitive areas like credit and finance, there’s a real possibility that bias is being unknowingly introduced. A McKinsey report into digital collections strategies plainly suggests that AI can be used to identify and segment customers by risk profile—i.e. credit score plus whatever other data points the lender can factor in—and fine-tune contact techniques accordingly. 

Friday, June 16, 2023

ChatGPT Is a Plagiarism Machine

Joseph Keegin
The Chronicle
Originally posted 23 MAY 23

Here is an excerpt:

A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment. One obviously hopes that, eventually, students become motivated less by fear of punishment than by a sense of responsibility for their own education. But if those in charge of the institutions of learning — the ones who are supposed to set an example and lay out the rules — can’t bring themselves to even talk about a major issue, let alone establish clear and reasonable guidelines for those facing it, how can students be expected to know what to do?

So to any deans, presidents, department chairs, or other administrators who happen to be reading this, here are some humble, nonexhaustive, first-aid-style recommendations. First, talk to your faculty — especially junior faculty, contingent faculty, and graduate-student lecturers and teaching assistants — about what student writing has looked like this past semester. Try to find ways to get honest perspectives from students, too; the ones actually doing the work are surely frustrated at their classmates’ laziness and dishonesty. Any meaningful response is going to demand knowing the scale of the problem, and the paper-graders know best what’s going on. Ask teachers what they’ve seen, what they’ve done to try to mitigate the possibility of AI plagiarism, and how well they think their strategies worked. Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right. It is worth figuring out where everyone stands on this question, and how best to align different perspectives and make allowances for divergent opinions while holding a firm line on the question of plagiarism.

Second, meet with your institution’s existing honor board (or whatever similar office you might have for enforcing the strictures of academic integrity) and devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.

Lastly, and perhaps most importantly, make it very, very clear to your student body — perhaps via a firmly worded statement — that AI-generated work submitted as original effort will be punished to the fullest extent of what your institution allows. Post the statement on your institution’s website and make it highly visible on the home page. Consider using this challenge as an opportunity to reassert the basic purpose of education: to develop the skills, to cultivate the virtues and habits of mind, and to acquire the knowledge necessary for leading a rich and meaningful human life.

Thursday, June 15, 2023

Moralization and extremism robustly amplify myside sharing

Marie, A, Altay, S., et al.
PNAS Nexus, Volume 2, Issue 4, April 2023.

Abstract

We explored whether moralization and attitude extremity may amplify a preference to share politically congruent (“myside”) partisan news and what types of targeted interventions may reduce this tendency. Across 12 online experiments (N = 6,989), we examined decisions to share news touching on the divisive issues of gun control, abortion, gender and racial equality, and immigration. Myside sharing was systematically observed and was consistently amplified when participants (i) moralized and (ii) were attitudinally extreme on the issue. The amplification of myside sharing by moralization also frequently occurred above and beyond that of attitude extremity. These effects generalized to both true and fake partisan news. We then examined a number of interventions meant to curb myside sharing by manipulating (i) the audience to which people imagined sharing partisan news (political friends vs. foes), (ii) the anonymity of the account used (anonymous vs. personal), (iii) a message warning against the myside bias, and (iv) a message warning against the reputational costs of sharing “mysided” fake news coupled with an interactive rating task. While some of those manipulations slightly decreased sharing in general and/or the size of myside sharing, the amplification of myside sharing by moral attitudes was consistently robust to these interventions. Our findings regarding the robust exaggeration of selective communication by morality and extremism offer important insights into belief polarization and the spread of partisan and false information online.

General discussion

Across 12 experiments (N = 6,989), we explored US participants’ intentions to share true and fake partisan news on 5 controversial issues—gun control, abortion, racial equality, sex equality, and immigration—in social media contexts. Our experiments consistently show that people have a strong sharing preference for politically congruent news—Democrats even more so than Republicans. They also demonstrate that this “myside” sharing is magnified when respondents see the issue as being of “absolute moral importance”, and when they have an extreme attitude on the issue. Moreover, issue moralization was found to amplify myside sharing above and beyond attitude extremity in the majority of the studies. Expanding prior research on selective communication, our work provides a clear demonstration that citizens’ myside communicational preference is powerfully amplified by their moral and political ideology (18, 19, 39–43).

By examining this phenomenon across multiple experiments varying numerous parameters, we demonstrated the robustness of myside sharing and of its amplification by participants’ issue moralization and attitude extremity. First, those effects were consistently observed on both true (Experiments 1, 2, 3, 5a, 6a, 7, and 10) and fake (Experiments 4, 5b, 6b, 8, 9, and 10) news stories and across distinct operationalizations of our outcome variable. Moreover, myside sharing and its amplification by issue moralization and attitude extremity were systematically observed despite multiple manipulations of the sharing context. Namely, those effects were observed whether sharing was done from one's personal or an anonymous social media account (Experiments 5a and 5b), whether the audience was made of political friends or foes (Experiments 6a and 6b), and whether participants first saw intervention messages warning against the myside bias (Experiments 7 and 8), or an interactive intervention warning against the reputational costs of sharing mysided falsehoods (Experiments 9 and 10).

Wednesday, June 14, 2023

Can Robotic AI Systems Be Virtuous and Why Does This Matter?

Constantinescu, M., Crisp, R. 
Int J of Soc Robotics 14, 
1547–1557 (2022).

Abstract

The growing use of social robots in times of isolation refocuses ethical concerns for Human–Robot Interaction and its implications for social, emotional, and moral life. In this article we raise a virtue-ethics-based concern regarding deployment of social robots relying on deep learning AI and ask whether they may be endowed with ethical virtue, enabling us to speak of “virtuous robotic AI systems”. In answering this question, we argue that AI systems cannot genuinely be virtuous but can only behave in a virtuous way. To that end, we start from the philosophical understanding of the nature of virtue in the Aristotelian virtue ethics tradition, which we take to imply the ability to perform (1) the right actions (2) with the right feelings and (3) in the right way. We discuss each of the three requirements and conclude that AI is unable to satisfy any of them. Furthermore, we relate our claims to current research in machine ethics, technology ethics, and Human–Robot Interaction, discussing various implications, such as the possibility to develop Autonomous Artificial Moral Agents in a virtue ethics framework.

Conclusion

AI systems are neither moody nor dissatisfied, and they do not want revenge, which seems to be an important advantage over humans when it comes to making various decisions, including ethical ones. However, from a virtue ethics point of view, this advantage becomes a major drawback. For this also means that they cannot act out of a virtuous character, either. Despite their ability to mimic human virtuous actions and even to function behaviourally in ways equivalent to human beings, robotic AI systems cannot perform virtuous actions in accordance with virtues, that is, rightly or virtuously; nor for the right reasons and motivations; nor through phronesis take into account the right circumstances. And this has the consequence that AI cannot genuinely be virtuous, at least not with the current technological advances supporting their functional development. Nonetheless, it might well be that the more we come to know about AI, the less we know about its future. We therefore leave open the possibility of AI systems being virtuous in some distant future. This might, however, require some disruptive, non-linear evolution that includes, for instance, the possibility that robotic AI systems fully deliberate over their own versus others' goals and happiness and make their own choices and priorities accordingly. Indeed, to be a virtuous agent one needs to have the possibility to make mistakes, to reason over virtuous and vicious lines of action. But then this raises a different question: are we prepared to experience interaction with vicious robotic AI systems?

Tuesday, June 13, 2023

Using the Veil of Ignorance to align AI systems with principles of justice

Weidinger, L. McKee, K.R., et al. (2023).
PNAS, 120(18), e2213709120

Abstract

The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an Artificial Intelligence (AI) assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood of participants continuing to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.

Significance

The growing integration of Artificial Intelligence (AI) into society raises a critical question: How can principles be fairly selected to govern these systems? Across five studies, with a total of 2,508 participants, we use the Veil of Ignorance to select principles to align AI systems. Compared to participants who know their position, participants behind the veil more frequently choose, and endorse upon reflection, principles for AI that prioritize the worst-off. This pattern is driven by increased consideration of fairness, rather than by political orientation or attitudes to risk. Our findings suggest that the Veil of Ignorance may be a suitable process for selecting principles to govern real-world applications of AI.

From the Discussion section

What do these findings tell us about the selection of principles for AI in the real world? First, the effects we observe suggest that—even though the VoI was initially proposed as a mechanism to identify principles of justice to govern society—it can be meaningfully applied to the selection of governance principles for AI. Previous studies applied the VoI to the state, such that our results provide an extension of prior findings to the domain of AI. Second, the VoI mechanism demonstrates many of the qualities that we want from a real-world alignment procedure: It is an impartial process that recruits fairness-based reasoning rather than self-serving preferences. It also leads to choices that people continue to endorse across different contexts even where they face a self-interested motivation to change their mind. This is both functionally valuable in that aligning AI to stable preferences requires less frequent updating as preferences change, and morally significant, insofar as we judge stable reflectively endorsed preferences to be more authoritative than their nonreflectively endorsed counterparts. Third, neither principle choice nor subsequent endorsement appear to be particularly affected by political affiliation—indicating that the VoI may be a mechanism to reach agreement even between people with different political beliefs. Lastly, these findings provide some guidance about what the content of principles for AI, selected from behind a VoI, may look like: When situated behind the VoI, the majority of participants instructed the AI assistant to help those who were least advantaged.

Monday, June 12, 2023

Why some mental health professionals avoid self-care

Dattilio, F. M. (2023).
Journal of Consulting and Clinical Psychology, 
91(5), 251–253.
https://doi.org/10.1037/ccp0000818

Abstract

This article briefly discusses reasons why some mental health professionals are resistant to self-care. These reasons include the savior complex, avoidance, and lack of collegial assiduity. Several proposed solutions are offered.

Here is an excerpt:

Savior Complex

One hypothesis used to explain professionals’ resistance is what some refer to as a “savior complex.” Certain MHPs may be engaging in the cognitive distortion that it is their duty to save as many people from suffering and demise as they can and in turn need to sacrifice their own psychological welfare for those facing distress. MHPs may be skewed in their thinking that they are also invulnerable to psychological and other stressors. Inherent in this distortion is their fear of being viewed as weak or ineffective, and as a result, they overcompensate by attempting to be stronger than others. This type of thinking may also involve a defense mechanism that develops early in their professional lives and emerges during the course of their work in the field. This may stem from preexisting components of their personality dynamics. 

Another reason may be that the extreme rewards that professionals experience from helping others in such a desperate state of need serve as a euphoric experience for them that can be addictive. In essence, the “high” that they obtain from helping others often spurs them on.

Avoidance

Another less complicated explanation for MHPs’ blindness to their own vulnerabilities may be their strong desire to avoid admitting to their own weaknesses and sense of vulnerability. The defense mechanism of rationalization that they are stronger and healthier than everyone else may embolden them to push on even when there are visible signs to others of the stress in their lives that is compromising their functioning. 

Avoidance is also a way of sidestepping the obvious and putting it off until later. This may be coupled with the increased demand for mental health services, which the recent pandemic has only intensified.

Denial

The dismissal of MHPs’ own needs, or what some may term “denial,” is a deeper aspect that goes hand in hand with the cognitive distortions that develop among MHPs, but it involves a more complex level of blindness to the obvious (Bearse et al., 2013). It may also serve as a way for professionals to devalue their own emotional and psychological challenges.

Denial may also stem from an underlying fear of being determined as incapacitated or not up to the challenge by their colleagues and thus prohibited from returning to their work or having to face limitations or restrictions. It can sometimes emanate from the fear of being reported as having engaged in unethical behavior by not seeking assistance sooner. This is particularly so with cases of MHPs who have become involved with illicit drug or alcohol abuse or addiction. 

Most ethical codes mandate that MHPs strive to remain cognizant of the potential effects of their work on their own physical and mental health while treating others, and to recognize when their ability to be effective has been compromised.

Last, in some cases, MHPs’ denial can be a response to genuine and accurately perceived expectations in work contexts where they lack control over their schedules. This may occur more commonly in facilities or institutions that do not support the disclosure of vulnerability and stress. For these reasons, the American and Canadian Psychological Associations, as well as other mental health organizations, have mandated special education on this topic in graduate training programs (American Psychiatric Association, 2013; Maranzan et al., 2018).

Lack of Collegial Assiduity

A final reason may be a lack of collegial assiduity: fellow MHPs observe colleagues showing signs of stress but fail to confront them and alert them to the obvious. Addressing the issue is often awkward and uncomfortable, and it risks rebuke or a negative reaction, so colleagues simply avoid it altogether, leaving the concern unaddressed.

The article is paywalled here, which is a complete shame.  We need more access to self-care resources.

Sunday, June 11, 2023

Podcast: Ethics Education and the Impact of AI on Mental Health

Hi All-

I recently had the privilege of being interviewed on the Psyched To Practice podcast. During this wide-ranging and unscripted interview, Ray Christner, Paul Wagner, and I engage in an insightful discussion about ethics, ethical decision-making, morality, and the potential impact of artificial intelligence on the practice of psychotherapy.

After sharing a limited biographical account of my personal journey towards becoming a clinical psychologist, we delve into various topics including ethical codes, decision science, and the significant role that morality plays in shaping the practice of clinical psychology.

The interview runs approximately one hour and 17 minutes, and I recommend taking the time to listen, particularly if you are an early- or mid-career mental health professional. The conversation offers valuable insights and perspectives that can contribute to your professional growth and development.

Although the podcast was unscripted, I provide below a reference list of the ideas I addressed during the interview, in alphabetical order rather than the order in which I discussed them.

References

Baxter, R. (2023, June 8). Lawyer’s AI Blunder Shows Perils of ChatGPT in ‘Early Days.’ Bloomberg Law News. Retrieved from https://news.bloomberglaw.com/business-and-practice/lawyers-ai-blunder-shows-perils-of-chatgpt-in-early-days


Chen, J., Zhang, Y., Wang, Y., Zhang, Z., Zhang, X., & Li, J. (2023). Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nature Biotechnology, 31(6), 631–636. https://doi.org/10.1038/s41587-023-00949-7


Dillon, D., Tandon, N., Gu, Y., & Gray, K. (2023). Can AI language models replace human participants? Trends in Cognitive Sciences, in press.


Fowler, A. (2023, June 7). Artificial intelligence could help predict breast cancer risk. USA Today. Retrieved from https://www.usatoday.com/story/news/health/2023/06/07/artificial-intelligence-breast-cancer-risk-prediction/70297531007/


Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York, NY: Pantheon Books.


Handelsman, M. M., Gottlieb, M. C., & Knapp, S. (2005). Training ethical psychologists: An acculturation model. Professional Psychology: Research and Practice, 36(1), 59–65. https://doi.org/10.1037/0735-7028.36.1.59


Heinlein, R. A. (1961). Stranger in a strange land. New York, NY: Putnam.


Knapp, S. J., & VandeCreek, L. D. (2006). Practical ethics for psychologists: A positive approach. Washington, DC: American Psychological Association.


MacIver, M. B. (2022). Consciousness and inward electromagnetic field interactions. Frontiers in Human Neuroscience, 16, 1032339. https://doi.org/10.3389/fnhum.2022.1032339


Persson, G., Restori, K. H., Emdrup, J. H., Schussek, S., Klausen, M. S., Nicol, M. J., Katkere, B., Rønø, B., Kirimanjeswara, G., & Sørensen, A. B. (2023). DNA immunization with in silico predicted T-cell epitopes protects against lethal SARS-CoV-2 infection in K18-hACE2 mice. Frontiers in Immunology, 14, 1166546. https://doi.org/10.3389/fimmu.2023.1166546


Schwartz, S. H. (1992). Universalism-particularism: Values in the context of cultural evolution. In M. Zanna (Ed.), Advances in experimental social psychology (Vol. 25, pp. 1-65). New York, NY: Academic Press.