Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Thursday, November 30, 2023

The Cynical Genius Illusion: Exploring and Debunking Lay Beliefs About Cynicism and Competence

Stavrova, O., & Ehlebracht, D. (2019).
Personality and Social Psychology Bulletin, 45(2), 254–269.

Abstract

Cynicism refers to a negative appraisal of human nature: a belief that self-interest is the ultimate motive guiding human behavior. We explored laypersons' beliefs about cynicism and competence and to what extent these beliefs correspond to reality. Four studies showed that laypeople tend to believe in cynical individuals' cognitive superiority. A further three studies based on the data of about 200,000 individuals from 30 countries debunked these lay beliefs as illusory by revealing that cynical (vs. less cynical) individuals generally do worse on cognitive ability and academic competency tasks. Cross-cultural analyses showed that competent individuals held contingent attitudes and endorsed cynicism only if it was warranted in a given sociocultural environment. Less competent individuals embraced cynicism unconditionally, suggesting that, at low levels of competence, holding a cynical worldview might represent an adaptive default strategy to avoid the potential costs of falling prey to others' cunning.


Here is my summary:

This article explores the relationship between cynicism and competence. The authors find that people tend to believe that cynical people are more intelligent and competent than others. However, they also find that this belief is not supported by evidence. In fact, cynical people tend to perform worse on cognitive ability and academic competency tasks.

The authors suggest that the belief that cynical people are more intelligent and competent may be due to a number of factors, including:
  • Cynical people are often seen as more realistic and worldly.
  • Cynical people are often more confident and assertive.
  • Cynical people are often more successful in certain professions, such as law and business.
However, the authors argue that these factors do not necessarily mean that cynical people are more intelligent or competent. In fact, they suggest that cynicism may actually be a sign of low intelligence and competence.

Wednesday, November 29, 2023

A justification-suppression model of the expression and experience of prejudice

Crandall, C. S., & Eshleman, A. (2003).
Psychological Bulletin, 129(3), 414–446.
https://doi.org/10.1037/0033-2909.129.3.414

Abstract

The authors propose a justification-suppression model (JSM), which characterizes the processes that lead to prejudice expression and the experience of one's own prejudice. They suggest that "genuine" prejudices are not directly expressed but are restrained by beliefs, values, and norms that suppress them. Prejudices are expressed when justifications (e.g., attributions, ideologies, stereotypes) release suppressed prejudices. The same process accounts for which prejudices are accepted into the self-concept. The JSM is used to organize the prejudice literature, and many empirical findings are recharacterized as factors affecting suppression or justification, rather than directly affecting genuine prejudice. The authors discuss the implications of the JSM for several topics, including prejudice measurement, ambivalence, and the distinction between prejudice and its expression.


This is an oldie, but goodie!!  Here is my summary:

This article is about prejudice and the factors that influence its expression. The authors propose a justification-suppression model (JSM) to explain how prejudice is expressed. The JSM suggests that people have genuine prejudices that are not directly expressed. Instead, these prejudices are suppressed by people’s beliefs, values, and norms. Prejudice is expressed when justifications (e.g., attributions, ideologies, stereotypes) release suppressed prejudices.

The authors also discuss the implications of the JSM for prejudice measurement, ambivalence, and the distinction between prejudice and its expression.

Here are some key takeaways from the article:
  • Prejudice is a complex phenomenon that is influenced by a variety of factors, including individual beliefs, values, and norms, as well as social and cultural contexts.
  • People may have genuine prejudices that they do not directly express. These prejudices may be suppressed by people’s beliefs, values, and norms.
  • Prejudice is expressed when justifications (e.g., attributions, ideologies, stereotypes) release suppressed prejudices.
  • The JSM can be used to explain a wide range of findings on prejudice, including prejudice measurement, ambivalence, and the distinction between prejudice and its expression.

Tuesday, November 28, 2023

Ethics of psychotherapy rationing: A review of ethical and regulatory documents in Canadian professional psychology

Gower, H. K., & Gaine, G. S. (2023).
Canadian Psychology / Psychologie canadienne. 
Advance online publication.

Abstract

Ethical and regulatory documents in Canadian professional psychology were reviewed for principles and standards related to the rationing of psychotherapy. Despite Canada’s high per capita health care expenses, mental health in Canada receives relatively low funding. Further, surveys indicated that Canadians have unmet needs for psychotherapy. Effective and ethical rationing of psychological treatment is a necessity, yet the topic of rationing in psychology has received scant attention. The present study involved a qualitative review of codes of ethics, codes of conduct, and standards of practice documents for their inclusion of rationing principles and standards. Findings highlight the strengths and shortcomings of these documents related to guiding psychotherapy rationing. The discussion offers recommendations for revising these ethical and regulatory documents to promote more equitable and cost-effective use of limited psychotherapy resources in Canada.

Impact Statement

Canadian professional psychology regulatory documents contain limited reference to rationing imperatives, despite scarce psychotherapy resources. While the foundation of distributive justice is in place, rationing-specific principles, standards, and practices are required to foster the fair and equitable distribution of psychotherapy by Canadian psychologists.

From the recommendations:

Recommendations for Canadian Psychology Regulatory Documents
  1. Explicitly widen psychologists’ scope of concern to include not only current clients but also waiting clients and those who need treatment but face access barriers.
  2. Acknowledge the scarcity of health care resources (in public and private settings) and the high demand for psychology services (e.g., psychotherapy) and admonish inefficient and cost-ineffective use.
  3. Draw an explicit connection between the general principle of distributive justice and the specific practices related to rationing of psychology resources, including, especially, mitigation of biases likely to weaken ethical decision making.
  4. Encourage the use of outcome monitoring measures to aid relative utility calculations for triage and termination decisions and to ensure efficiency and distributive justice.
  5. Recommend advocacy by psychologists to address barriers to accessing needed services (e.g., psychotherapy), including promoting the cost effectiveness of psychotherapy as well as highlighting systemic barriers related to presenting problem, disability, ethnicity, race, gender, sexuality, or income.

Monday, November 27, 2023

Synthetic human embryos created in groundbreaking advance

Hannah Devlin
The Guardian
Originally posted 14 JUNE 23

Here is an excerpt:

“Our human model is the first three-lineage human embryo model that specifies amnion and germ cells, precursor cells of egg and sperm,” Żernicka-Goetz told the Guardian before the talk. “It’s beautiful and created entirely from embryonic stem cells.”

The development highlights how rapidly the science in this field has outpaced the law, and scientists in the UK and elsewhere are already moving to draw up voluntary guidelines to govern work on synthetic embryos. “If the whole intention is that these models are very much like normal embryos, then in a way they should be treated the same,” Lovell-Badge said. “Currently in legislation they’re not. People are worried about this.”

There is also a significant unanswered question on whether these structures, in theory, have the potential to grow into a living creature. The synthetic embryos grown from mouse cells were reported to appear almost identical to natural embryos. But when they were implanted into the wombs of female mice, they did not develop into live animals. In April, researchers in China created synthetic embryos from monkey cells and implanted them into the wombs of adult monkeys, a few of which showed the initial signs of pregnancy but none of which continued to develop beyond a few days. Scientists say it is not clear whether the barrier to more advanced development is merely technical or has a more fundamental biological cause.


Here is my summary:

Researchers used stem cells to create structures that resemble early-stage human embryos, built entirely from embryonic stem cells rather than from the union of an egg and sperm.

The synthetic embryos could be used to study human development and to develop new treatments for infertility and miscarriage. However, the research also raises ethical concerns, as it is not clear whether the synthetic embryos should be considered the same as natural embryos.

Some bioethicists have argued that the synthetic embryos should be treated with the same respect as natural embryos, as they have the potential to develop into human beings. Others have argued that the synthetic embryos are not the same as natural embryos, as they were not created through the union of an egg and sperm.

The research has been welcomed by some scientists, who believe it has the potential to revolutionize our understanding of human development. However, other scientists have expressed concern about the ethical implications of the research.

Sunday, November 26, 2023

How robots can learn to follow a moral code

Neil Savage
Nature.com
Originally posted 26 OCT 23

Here is an excerpt:

Defining ethics

The ability to fine-tune an AI system’s behaviour to promote certain values has inevitably led to debates on who gets to play the moral arbiter. Vosoughi suggests that his work could be used to allow societies to tune models to their own taste — if a community provides examples of its moral and ethical values, then with these techniques it could develop an LLM more aligned with those values, he says. However, he is well aware of the possibility for the technology to be used for harm. “If it becomes a free for all, then you’d be competing with bad actors trying to use our technology to push antisocial views,” he says.

Precisely what constitutes an antisocial view or unethical behaviour, however, isn’t always easy to define. Although there is widespread agreement about many moral and ethical issues — the idea that your car shouldn’t run someone over is pretty universal — on other topics there is strong disagreement, such as abortion. Even seemingly simple issues, such as the idea that you shouldn’t jump a queue, can be more nuanced than is immediately obvious, says Sydney Levine, a cognitive scientist at the Allen Institute. If a person has already been served at a deli counter but drops their spoon while walking away, most people would agree it’s okay to go back for a new one without waiting in line again, so the rule ‘don’t cut the line’ is too simple.

One potential approach for dealing with differing opinions on moral issues is what Levine calls a moral parliament. “This problem of who gets to decide is not just a problem for AI. It’s a problem for governance of a society,” she says. “We’re looking to ideas from governance to help us think through these AI problems.” Similar to a political assembly or parliament, she suggests representing multiple different views in an AI system. “We can have algorithmic representations of different moral positions,” she says. The system would then attempt to calculate what the likely consensus would be on a given issue, based on a concept from game theory called cooperative bargaining.
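
To make Levine's "moral parliament" idea a bit more concrete, here is a toy sketch of how algorithmic representations of different moral positions might be combined using a cooperative-bargaining rule (a Nash product over gains above a disagreement point). The delegates, options, scores, and disagreement point are all invented for illustration; the article does not specify an implementation.

```python
# Toy "moral parliament": several algorithmic moral positions score the available
# options, and a cooperative-bargaining rule (the Nash product) picks the option
# the positions could most plausibly agree on. All numbers are made up.
from math import prod

delegates = {
    "utilitarian":   {"swerve": 0.9, "stay": 0.4, "brake_hard": 0.7},
    "deontological": {"swerve": 0.3, "stay": 0.8, "brake_hard": 0.9},
    "virtue_based":  {"swerve": 0.6, "stay": 0.5, "brake_hard": 0.8},
}

DISAGREEMENT_POINT = 0.1  # utility each position gets if no agreement is reached


def nash_bargain(delegates: dict, disagreement: float) -> str:
    """Pick the option maximizing the product of each delegate's gain over the disagreement point."""
    options = next(iter(delegates.values())).keys()

    def joint_gain(option: str) -> float:
        return prod(max(scores[option] - disagreement, 0.0) for scores in delegates.values())

    return max(options, key=joint_gain)


print(nash_bargain(delegates, DISAGREEMENT_POINT))  # "brake_hard" for these made-up numbers
```

With these invented numbers the winning option is the one every position can at least live with, which is the basic intuition behind using cooperative bargaining rather than letting any single moral framework dictate the answer.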


Here is my summary:

Autonomous robots will need to be able to make ethical decisions in order to safely and effectively interact with humans and the world around them.

The article proposes a number of ways that robots can be taught to follow a moral code. One approach is to use supervised learning, in which robots are trained on a dataset of moral dilemmas and their corresponding solutions. Another approach is to use reinforcement learning, in which robots are rewarded for making ethical decisions and punished for making unethical decisions.
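
Here is a minimal sketch of the supervised-learning approach mentioned above: train a small text classifier on labeled moral dilemmas and ask it to judge a new case. The example dilemmas, labels, and the scikit-learn pipeline are my own illustration, not a system described in the article, and a real system would need a far larger and more carefully curated dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: descriptions of actions labeled by human annotators.
dilemmas = [
    "take the last seat on the bus instead of offering it to an elderly passenger",
    "return a lost wallet to its owner with all the cash inside",
    "read a coworker's private messages without permission",
    "help a stranger carry groceries up the stairs",
]
labels = ["unethical", "ethical", "unethical", "ethical"]

# Bag-of-words features plus a linear classifier: the simplest version of
# "learn a mapping from described situations to moral judgments."
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(dilemmas, labels)

print(model.predict(["keep a lost wallet you found on the street"]))
```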

The article also discusses the challenges of teaching robots to follow a moral code. One challenge is that moral codes are often complex and nuanced, and it can be difficult to define them in a way that can be understood by a robot. Another challenge is that moral codes can vary across cultures, and it is important to develop robots that can adapt to different moral frameworks.

The article concludes by arguing that teaching robots to follow a moral code is an important ethical challenge that we need to address as we develop more sophisticated artificial intelligence systems.

Saturday, November 25, 2023

An autonomy-based approach to assisted suicide: a way to avoid the expressivist objection against assisted dying laws

Braun, E.
Journal of Medical Ethics 
2023;49:497-501

Abstract

In several jurisdictions, irremediable suffering from a medical condition is a legal requirement for access to assisted dying. According to the expressivist objection, allowing assisted dying for a specific group of persons, such as those with irremediable medical conditions, expresses the judgment that their lives are not worth living. While the expressivist objection has often been used to argue that assisted dying should not be legalised, I show that there is an alternative solution available to its proponents. An autonomy-based approach to assisted suicide regards the provision of assisted suicide (but not euthanasia) as justified when it is autonomously requested by a person, irrespective of whether this is in her best interests. Such an approach has been put forward by a recent judgment of the German Federal Constitutional Court, which understands assisted suicide as an expression of the person’s right to a self-determined death. It does not allow for beneficence-based restrictions regarding the person’s suffering or medical diagnosis and therefore avoids the expressivist objection. I argue that on an autonomy-based approach, assisted suicide should not be understood as a medical procedure but rather as the person’s autonomous action.

Conclusion

Assuming that the expressivist argument is valid, it only applies to (partly) beneficence-based approaches to assisted dying that require irremediable suffering. An autonomy-based approach to assisted suicide, as put forward by the German Federal Constitutional Court, avoids the expressivist objection. It understands assisted suicide as an act justified by autonomy and does not imply objective judgments of whether the person’s life is worth living. I have argued that on an autonomy-based approach, assisted suicide should not be understood as a medical intervention but rather as an autonomous action that does not invoke traditional medical principles such as beneficence.


Said differently: 

The article argues that an autonomy-based approach to assisted suicide can avoid the expressivist objection against assisted dying laws. The expressivist objection holds that allowing assisted dying for a specific group, such as people with irremediable medical conditions, expresses the judgment that their lives are not worth living. The author argues that this objection does not apply to an autonomy-based approach, which justifies assisted suicide solely by respect for a person's autonomous request rather than by any judgment about whether her life is worth living.  (Autonomy > beneficence)

Friday, November 24, 2023

UnitedHealth faces class action lawsuit over algorithmic care denials in Medicare Advantage plans

Casey Ross and Bob Herman
Statnews.com
Originally posted 14 Nov 23

A class action lawsuit was filed Tuesday against UnitedHealth Group and a subsidiary alleging that they are illegally using an algorithm to deny rehabilitation care to seriously ill patients, even though the companies know the algorithm has a high error rate.

The class action suit, filed on behalf of deceased patients who had a UnitedHealthcare Medicare Advantage plan and their families by the California-based Clarkson Law Firm, follows the publication of a STAT investigation Tuesday. The investigation, cited by the lawsuit, found UnitedHealth pressured medical employees to follow an algorithm, which predicts a patient’s length of stay, to issue payment denials to people with Medicare Advantage plans. Internal documents revealed that managers within the company set a goal for clinical employees to keep patients’ rehab stays within 1% of the days projected by the algorithm.

The lawsuit, filed in the U.S. District Court of Minnesota, accuses UnitedHealth and its subsidiary, NaviHealth, of using the computer algorithm to “systematically deny claims” of Medicare beneficiaries struggling to recover from debilitating illnesses in nursing homes. The suit also cites STAT’s previous reporting on the issue.

“The fraudulent scheme affords defendants a clear financial windfall in the form of policy premiums without having to pay for promised care,” the complaint alleges. “The elderly are prematurely kicked out of care facilities nationwide or forced to deplete family savings to continue receiving necessary care, all because an [artificial intelligence] model ‘disagrees’ with their real live doctors’ recommendations.”


Here are some of my concerns:

The use of algorithms in healthcare decision-making has raised a number of ethical concerns. Some critics argue that algorithms can be biased and discriminatory, and that they can lead to decisions that are not in the best interests of patients. Others argue that algorithms can lack transparency, and that they can make it difficult for patients to understand how decisions are being made.

The lawsuit against UnitedHealth raises a number of specific ethical concerns. First, the plaintiffs allege that UnitedHealth's algorithm is based on inaccurate and incomplete data. This raises the concern that the algorithm may be making decisions that are not based on sound medical evidence. Second, the plaintiffs allege that UnitedHealth has failed to adequately train its employees on how to use the algorithm. This raises the concern that employees may be making decisions that are not in the best interests of patients, either because they do not understand how the algorithm works or because they are pressured to deny claims.

The lawsuit also raises the general question of whether algorithms should be used to make healthcare decisions. Some argue that algorithms can be used to make more efficient and objective decisions than humans can. Others argue that algorithms are not capable of making complex medical decisions that require an understanding of the individual patient's circumstances.

The use of algorithms in healthcare is a complex issue with no easy answers. It is important to carefully consider the potential benefits and risks of using algorithms before implementing them in healthcare settings.

Thursday, November 23, 2023

How to Maintain Hope in an Age of Catastrophe

Masha Gessen
The Atlantic
Originally posted 12 Nov 23

Gessen interviews psychoanalyst and author Robert Jay Lifton.  Here is an excerpt from the beginning of the article/interview:

Lifton is fascinated by the range and plasticity of the human mind, its ability to contort to the demands of totalitarian control, to find justification for the unimaginable—the Holocaust, war crimes, the atomic bomb—and yet recover, and reconjure hope. In a century when humanity discovered its capacity for mass destruction, Lifton studied the psychology of both the victims and the perpetrators of horror. “We are all survivors of Hiroshima, and, in our imaginations, of future nuclear holocaust,” he wrote at the end of “Death in Life.” How do we live with such knowledge? When does it lead to more atrocities and when does it result in what Lifton called, in a later book, “species-wide agreement”?

Lifton’s big books, though based on rigorous research, were written for popular audiences. He writes, essentially, by lecturing into a Dictaphone, giving even his most ambitious works a distinctive spoken quality. In between his five large studies, Lifton published academic books, papers and essays, and two books of cartoons, “Birds” and “PsychoBirds.” (Every cartoon features two bird heads with dialogue bubbles, such as, “ ‘All of a sudden I had this wonderful feeling: I am me!’ ” “You were wrong.”) Lifton’s impact on the study and treatment of trauma is unparalleled. In a 2020 tribute to Lifton in the Journal of the American Psychoanalytic Association, his former colleague Charles Strozier wrote that a chapter in “Death in Life” on the psychology of survivors “has never been surpassed, only repeated many times and frequently diluted in its power. All those working with survivors of trauma, personal or sociohistorical, must immerse themselves in his work.”


Here is my summary of the article and helpful tips.  Happy (hopeful) Thanksgiving!!

Hope is not blind optimism or wishful thinking, but rather a conscious decision to act in the face of uncertainty and to believe in the possibility of a better future. The article/interview identifies several key strategies for cultivating hope, including:
  • Nurturing a sense of purpose: Having a clear sense of purpose can provide direction and motivation, even in the darkest of times. This purpose can be rooted in personal goals, relationships, or a commitment to a larger cause.
  • Engaging in meaningful action: Taking concrete steps, no matter how small, can help to combat feelings of helplessness and despair. Action can range from individual acts of kindness to participation in collective efforts for social change.
  • Cultivating a sense of community: Connecting with others who share our concerns can provide a sense of belonging and support. Shared experiences and collective action can amplify our efforts and strengthen our resolve.
  • Maintaining a critical perspective: While it is important to hold onto hope, it is also crucial to avoid complacency or denial. We need to recognize the severity of the challenges we face and to remain vigilant in our efforts to address them.
  • Embracing resilience: Hope is not about denying hardship or expecting a quick and easy resolution to our problems. Rather, it is about cultivating the resilience to persevere through difficult times and to believe in the possibility of positive change.

The article concludes by emphasizing the importance of hope as a driving force for positive change. Hope is not a luxury, but a necessity for survival and for building a better future. By nurturing hope, we can empower ourselves and others to confront the challenges we face and to work towards a more just and equitable world.

Wednesday, November 22, 2023

The case for partisan motivated reasoning

Williams, D.
Synthese 202, 89 (2023).

Abstract

A large body of research in political science claims that the way in which democratic citizens think about politics is motivationally biased by partisanship. Numerous critics argue that the evidence for this claim is better explained by theories in which party allegiances influence political cognition without motivating citizens to embrace biased beliefs. This article has three aims. First, I clarify this criticism, explain why common responses to it are unsuccessful, and argue that to make progress on this debate we need a more developed theory of the connections between group attachments and motivated reasoning. Second, I develop such a theory. Drawing on research on coalitional psychology and the social functions of beliefs, I argue that partisanship unconsciously biases cognition by generating motivations to advocate for party interests, which transform individuals into partisan press secretaries. Finally, I argue that this theory offers a superior explanation of a wide range of relevant findings than purely non-motivational theories of political cognition.

My summary:

Partisan motivated reasoning is the tendency for people to seek out and interpret information in a way that confirms their existing political beliefs. This is a complex phenomenon, but Williams argues that it can be explained by the combination of two factors:

  1. Group attachments: People are strongly motivated to defend and promote the interests of their social groups, including their political parties.
  2. Motivated cognition: People are motivated to believe things that are true, but they are also motivated to believe things that are consistent with their values and goals.
Williams argues that partisan motivated reasoning is a natural and predictable consequence of these two factors. When people are motivated to defend and promote their political party, they will be motivated to seek out and interpret information in a way that confirms their existing beliefs. They will also be motivated to downplay or ignore information that is inconsistent with their beliefs.

Williams provides a number of pieces of evidence to support his argument, including studies that show that people are more likely to believe information that is consistent with their political beliefs, even when that information is false. He also shows that people are more likely to seek out and consume information from sources that they agree with politically.

Williams concludes by arguing that partisan motivated reasoning is a serious problem for democracy. It can lead to people making decisions that are not in their own best interests, and it can make it difficult for people to have productive conversations about political issues.

Tuesday, November 21, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).

Abstract

One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.

Here is my summary:

The authors argue that many different biases, such as the bias blind spot, hostile media bias, egocentric/ethnocentric bias, and outcome bias, can be traced back to the combination of a fundamental prior belief and humans' tendency toward belief-consistent information processing.

Belief-consistent information processing is the process of attending to, interpreting, and remembering information in a way that is consistent with one's existing beliefs. This process can lead to biases when it results in people ignoring or downplaying information that is inconsistent with their beliefs, and giving undue weight to information that is consistent with their beliefs.

The authors propose that different biases can be distinguished by the specific belief that guides information processing. For example, the bias blind spot is characterized by the belief that one is less biased than others, while hostile media bias is characterized by the belief that the media is biased against one's own group. However, the authors also argue that different biases may share the same underlying belief, and differ only in the specific outcome of information processing that is assessed. For example, both the bias blind spot and hostile media bias may involve the belief that one is more objective than others, but the bias blind spot is assessed in the context of self-evaluations, while hostile media bias is assessed in the context of evaluations of others.

The authors' framework has several advantages over existing theoretical explanations of biases. First, it provides a more parsimonious explanation for a wide range of biases. Second, it generates novel hypotheses that can be tested empirically. For example, the authors hypothesize that people who are more likely to believe in one bias will also be more likely to believe in other biases. Third, the framework has implications for interventions to reduce biases. For example, the authors suggest that interventions to reduce biases could focus on helping people to become more aware of their own biases and to develop strategies for resisting the tendency toward belief-consistent information processing.

Monday, November 20, 2023

Shifting evaluative construal: Common and distinct neural components of moral, pragmatic, and hedonic evaluations

Pretus, C., Swencionis, J. K., et al.
(2023, August 28). 

Abstract

People generate evaluations of different attitude objects based on their goals and aspects of the social context. Prior research suggests that people can shift between at least three types of evaluations to judge whether something is good or bad: pragmatic (how costly or beneficial it is), moral (whether it’s aligned with moral norms), and hedonic (whether it feels good; Van Bavel et al., 2012). The current research examined the neurocognitive computations underlying these types of evaluations to understand how people construct affective judgments. Specifically, we examined whether different types of evaluations stem from a common neural evaluation system that incorporates different information in response to changing evaluation goals (moral, pragmatic, or hedonic), or distinct evaluation systems with different neurofunctional architectures. We found support for a hybrid evaluation system in which people rely on a set of brain regions to construct all three forms of evaluation, but recruit additional distinct regions for each type of evaluation. The three types of evaluations all relied on common neural activity in affective structures such as the amygdala, the insula, and the hippocampus. However, moral evaluations involved greater neural activation in the orbitofrontal and cingulate cortex compared to pragmatic evaluations, and temporoparietal regions compared to hedonic evaluations. These results suggest that people use a hybrid system that includes common evaluation components as well as distinct ones to generate moral judgments.

Here is my summary:

The research found that people draw on a hybrid evaluation system to generate moral, pragmatic, and hedonic evaluations. This system involves a set of brain regions that are common to all three types of evaluations, as well as distinct regions that are specific to each type of evaluation.

The study used fMRI to examine the neural correlates of moral, pragmatic, and hedonic evaluations in participants who were instructed to make each type of evaluation on a set of stimuli. The stimuli included moral dilemmas, practical decisions, and objects that were associated with different levels of hedonic pleasure.

The results showed that all three types of evaluations were associated with common neural activity in affective structures such as the amygdala, insula, and hippocampus. However, moral evaluations involved greater neural activation in the orbitofrontal and cingulate cortex than pragmatic evaluations, and in temporoparietal regions than hedonic evaluations.

These findings suggest that people use a hybrid system to generate moral judgments. This system includes common evaluation components that are involved in processing affective information, as well as distinct components that are specialized for processing moral information.

The research has implications for our understanding of how people make moral decisions. It suggests that moral decisions are not made in isolation from other types of decisions, but rather are influenced by a common evaluation system that also plays a role in pragmatic and hedonic evaluations. The research also suggests that moral decisions may be influenced by distinct neural components that are specialized for processing moral information.

Sunday, November 19, 2023

AI Will—and Should—Change Medical School, Says Harvard’s Dean for Medical Education

Hswen Y, Abbasi J.
JAMA. Published online October 25, 2023.

Here is an excerpt:

Dr Bibbins-Domingo: When these types of generative AI tools first came into prominence or awareness, educators, whatever level of education they were involved with, had to scramble because their students were using them. They were figuring out how to put up the right types of guardrails, set the right types of rules. Are there rules or danger zones right now that you’re thinking about?

Dr Chang: Absolutely, and I think there’s quite a number of these. This is a focus that we’re embarking on right now because as exciting as the future is and as much potential as these generative AI tools have, there are also dangers and there are also concerns that we have to address.

One of them is helping our students, who like all of us are still new to this within the past year, understand the limitations of these tools. Now these tools are going to get better year after year after year, but right now they are still prone to hallucinations, or basically making up facts that aren’t really true and yet saying them with confidence. Our students need to recognize why it is that these tools might come up with those hallucinations to try to learn how to recognize them and to basically be on guard for the fact that just because ChatGPT is giving you a very confident answer, it doesn’t mean it’s the right answer. And in medicine of course, that’s very, very important. And so that’s one—just the accuracy and the validity of the content that comes out.

As I wrote about in my Viewpoint, the way that these tools work is basically a very fancy form of autocomplete, right? It is essentially using a probabilistic prediction of what the next word is going to be. And so there’s no separate validity or confirmation of the factual material, and that’s something that we need to make sure that our students understand.

The other thing is to address the fact that these tools may inherently be structurally biased. Now, why would that be? Well, as we know, ChatGPT and these other large language models [LLMs] are trained on the world’s internet, so to speak, right? They’re trained on the noncopyrighted corpus of material that’s out there on the web. And to the extent that that corpus of material was generated by human beings who in their postings and their writings exhibit bias in one way or the other, whether intentionally or not, that’s the corpus on which these LLMs are trained. So it only makes sense that when we use these tools, these tools are going to potentially exhibit evidence of bias. And so we need our students to be very aware of that. As we have worked to reduce the effects of systematic bias in our curriculum and in our clinical sphere, we need to recognize that as we introduce this new tool, this will be another potential source of bias.
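
Dr. Chang's "very fancy form of autocomplete" description can be shown in miniature. A language model assigns a probability to each candidate next token and then picks or samples one; nothing in that step verifies whether the continuation is factually true, which is why confident-sounding hallucinations are possible. The prompt, tokens, and probabilities below are invented for illustration.

```python
import random

# Invented probability distribution over the next token, e.g. after the prompt
# "For mild pain, a clinician might first recommend ..."
next_token_probs = {
    "acetaminophen": 0.46,
    "ibuprofen": 0.31,
    "aspirin": 0.18,
    "penicillin": 0.05,  # plausible-sounding but wrong continuations still get probability mass
}

# Greedy decoding: always take the single most likely token.
greedy = max(next_token_probs, key=next_token_probs.get)

# Sampling: draw a token in proportion to its probability, so less likely
# (and possibly false) continuations are sometimes produced just as confidently.
sampled = random.choices(list(next_token_probs), weights=list(next_token_probs.values()), k=1)[0]

print(greedy, sampled)
```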


Here is my summary:

Bernard Chang, the Dean for Medical Education at Harvard Medical School, argues that artificial intelligence (AI) is poised to transform medical education. He contends that AI has the potential to improve the way medical students learn and train, and that medical schools should not only embrace AI but also take an active role in shaping its development and use.

Chang identifies several areas where AI could have a significant impact on medical education. First, AI could be used to personalize learning and provide students with more targeted feedback. For example, AI-powered tutors could help students learn complex medical concepts at their own pace, and AI-powered diagnostic tools could help students practice their clinical skills.

Second, AI could be used to automate tasks that are currently performed by human instructors, such as grading exams and providing feedback on student assignments. This would free up instructors to focus on more high-value activities, such as mentoring students and leading discussions.

Third, AI could be used to create new educational experiences that are not possible with traditional methods. For example, AI could be used to create virtual patients that students can interact with to practice their clinical skills. AI could also be used to develop simulations of complex medical procedures that students can practice in a safe environment.

Chang argues that medical schools have a responsibility to prepare students for the future of medicine, which will be increasingly reliant on AI. He writes that medical schools should teach students how to use AI effectively, and how to critically evaluate AI-generated information. Medical schools should also develop new curricula that take into account the potential impact of AI on medical practice.

Saturday, November 18, 2023

Resolving the battle of short- vs. long-term AI risks

Sætra, H.S., Danaher, J.
AI Ethics (2023).

Abstract

AI poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to think two thoughts at the same time. While disagreements over the exact probabilities and impacts of risks will remain, fostering a more productive dialogue will be important. This entails, for example, distinguishing between evaluations of particular risks and the politics of risk. Without proper discussions of AI risk, it will be difficult to properly manage them, and we could end up in a situation where neither short- nor long-term risks are managed and mitigated.


Here is my summary:

Artificial intelligence (AI) poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to prioritize these risks. Some argue that short-term risks, such as bias and discrimination, are more pressing and should be addressed first, while others argue that long-term risks, such as the possibility of AI surpassing human intelligence and becoming uncontrollable, are more serious and should be prioritized.

Sætra and Danaher argue that it is important to consider both short- and long-term risks when developing AI policies and regulations. They point out that short-term risks can have long-term consequences, and that long-term risks can have short-term impacts. For example, if AI is biased against certain groups of people, this could lead to long-term inequality and injustice. Conversely, if we take steps to mitigate long-term risks, such as by developing safety standards for AI systems, this could also reduce short-term risks.

Sætra and Danaher offer a number of suggestions for how to better balance short- and long-term AI risks. One suggestion is to develop a risk matrix that categorizes risks by their impact and likelihood. This could help policymakers to identify and prioritize the most important risks. Another suggestion is to create a research agenda that addresses both short- and long-term risks. This would help to ensure that we are investing in the research that is most needed to keep AI safe and beneficial.
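
As a rough illustration of the risk-matrix suggestion, here is a toy sketch that scores each risk by likelihood and impact and buckets it into a priority tier. The example risks, numbers, and thresholds are placeholders of mine, not figures from Sætra and Danaher.

```python
# Invented example risks with rough likelihood and impact scores on a 0-1 scale.
risks = {
    "biased or discriminatory model decisions": {"likelihood": 0.8, "impact": 0.6},
    "large-scale AI-generated disinformation":  {"likelihood": 0.6, "impact": 0.8},
    "loss of control of highly capable AI":     {"likelihood": 0.1, "impact": 1.0},
}


def priority(likelihood: float, impact: float) -> str:
    """Bucket a risk into a tier using a simple likelihood-times-impact score."""
    score = likelihood * impact
    if score >= 0.4:
        return "act now"
    if score >= 0.1:
        return "monitor and fund research"
    return "watchlist"


for name, r in risks.items():
    print(f"{name}: {priority(r['likelihood'], r['impact'])}")
```

One point the authors stress is that such a matrix is only a starting place: a tier assignment reflects contested judgments about probability and impact, so the politics of risk still has to be debated rather than settled by the arithmetic.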

Friday, November 17, 2023

Humans feel too special for machines to score their morals

Purcell, Z. A., & Bonnefon, J.-F. (2023).
PNAS Nexus, 2(6).

Abstract

Artificial intelligence (AI) can be harnessed to create sophisticated social and moral scoring systems—enabling people and organizations to form judgments of others at scale. However, it also poses significant ethical challenges and is, subsequently, the subject of wide debate. As these technologies are developed and governing bodies face regulatory decisions, it is crucial that we understand the attraction or resistance that people have for AI moral scoring. Across four experiments, we show that the acceptability of moral scoring by AI is related to expectations about the quality of those scores, but that expectations about quality are compromised by people's tendency to see themselves as morally peculiar. We demonstrate that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and resist for this reason the introduction of moral scoring by AI.‌

Significance Statement

The potential use of artificial intelligence (AI) to create sophisticated social and moral scoring systems poses significant ethical challenges. To inform the regulation of this technology, it is critical that we understand the attraction or resistance that people have for AI moral scoring. This project develops that understanding across four empirical studies—demonstrating that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and resist for this reason the introduction of moral scoring by AI.

The link to the research is above.

My summary:

Here is another example of "myside bias," in which humans base decisions on their perceived uniqueness or a better-than-average belief. This research study investigated whether people would accept AI moral scoring systems. The study found that people are unlikely to accept such systems, in large part because they feel too special for machines to score their personal morals.

Specifically, the results showed that people were more likely to accept AI moral scoring systems if they believed that the systems were accurate. However, even if people believed that the systems were accurate, they were still less likely to accept them if they believed that they were morally unique.

The study's authors suggest that these findings may be due to the fact that people have a strong need to feel unique and special. They also suggest that people may be hesitant to trust AI systems to accurately assess their moral character.

Key findings:
  • People are unlikely to accept AI moral scoring systems, in large part because they feel too special for machines to score their personal morals.
  • People's willingness to accept AI moral scoring is influenced by two factors: their perceived accuracy of the system and their belief that they are morally unique.
  • People are more likely to accept AI moral scoring systems if they believe that the systems are accurate. However, even if people believe that the systems are accurate, they are still less likely to accept them if they believe that they are morally unique.

Thursday, November 16, 2023

Minds of machines: The great AI consciousness conundrum

Grace Huckins
MIT Technology Review
Originally published 16 October 23

Here is an excerpt:

At the breakneck pace of AI development, however, things can shift suddenly. For his mathematically minded audience, Chalmers got concrete: the chances of developing any conscious AI in the next 10 years were, he estimated, above one in five.

Not many people dismissed his proposal as ridiculous, Chalmers says: “I mean, I’m sure some people had that reaction, but they weren’t the ones talking to me.” Instead, he spent the next several days in conversation after conversation with AI experts who took the possibilities he’d described very seriously. Some came to Chalmers effervescent with enthusiasm at the concept of conscious machines. Others, though, were horrified at what he had described. If an AI were conscious, they argued—if it could look out at the world from its own personal perspective, not simply processing inputs but also experiencing them—then, perhaps, it could suffer.

AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code. Both mistakes are easy to make. “Consciousness poses a unique challenge in our attempts to study it, because it’s hard to define,” says Liad Mudrik, a neuroscientist at Tel Aviv University who has researched consciousness since the early 2000s. “It’s inherently subjective.”


Here is my take.

There is an ongoing debate about whether artificial intelligence can ever become conscious or have subjective experiences like humans. Some argue AI will inevitably become conscious as it advances, while others think consciousness requires biological qualities that AI lacks.

Philosopher David Chalmers has formulated the "hard problem of consciousness": explaining how physical processes in the brain give rise to subjective experience. This issue remains unresolved.

AI systems today show no signs of being conscious or having experiences. But some argue that as AI becomes more sophisticated, we may need to consider whether it could develop some level of consciousness.

Approaches like deep learning and neural networks are fueling major advances in narrow AI, but this type of statistical pattern recognition does not seem sufficient to produce consciousness.

Questions remain about whether artificial consciousness is possible or how we could detect if an AI system were to become conscious. There are also ethical implications regarding the rights of conscious AI.

Overall there is much speculation but no consensus on whether artificial general intelligence could someday become conscious like humans are. The answer awaits theoretical and technological breakthroughs.

Wednesday, November 15, 2023

Private UK health data donated for medical research shared with insurance companies

Shanti Das
The Guardian
Originally posted 12 Nov 23

Sensitive health information donated for medical research by half a million UK citizens has been shared with insurance companies despite a pledge that it would not be.

An Observer investigation has found that UK Biobank opened up its vast biomedical database to insurance sector firms several times between 2020 and 2023. The data was provided to insurance consultancy and tech firms for projects to create digital tools that help insurers predict a person’s risk of getting a chronic disease. The findings have raised concerns among geneticists, data privacy experts and campaigners over vetting and ethical checks at Biobank.

Set up in 2006 to help researchers investigating diseases, the database contains millions of blood, saliva and urine samples, collected regularly from about 500,000 adult volunteers – along with medical records, scans, wearable device data and lifestyle information.

Approved researchers around the world can pay £3,000 to £9,000 to access records ranging from medical history and lifestyle information to whole genome sequencing data. The resulting research has yielded major medical discoveries and led to Biobank being considered a “jewel in the crown” of British science.

Biobank said it strictly guarded access to its data, only allowing access by bona fide researchers for health-related projects in the public interest. It said this included researchers of all stripes, whether employed by academic, charitable or commercial organisations – including insurance companies – and that “information about data sharing was clearly set out to participants at the point of recruitment and the initial assessment”.


Here is my summary:

Private health data donated by over half a million UK citizens for medical research has been shared with insurance companies, despite a pledge that it would not be used for this purpose. The data, which includes genetic information, medical diagnoses, and lifestyle factors, has been used to develop digital tools that help insurers predict a person's risk of getting a chronic disease. This raises concerns about the privacy and security of sensitive health data, as well as the potential for insurance companies to use the data to discriminate against people with certain health conditions.

Tuesday, November 14, 2023

How important is the end of humanity? Lay people prioritize extinction prevention but not above all other societal issues

Coleman, M. B., Caviola, L., et al.
(2023, October 21). 

Abstract

Human extinction would mean the deaths of eight billion people and the end of humanity’s achievements, culture, and future potential. On several ethical views, extinction would be a terrible outcome. How do people think about human extinction? And how much do they prioritize preventing extinction over other societal issues? Across six empirical studies (N = 2,541; U.S. and China) we find that people consider extinction prevention a global priority and deserving of greatly increased societal resources. However, despite estimating the likelihood of human extinction to be 5% this century (U.S. median), people believe the odds would need to be around 30% for it to be the very highest priority. Consequently, people consider extinction prevention to be only one among several important societal issues. People’s judgments about the relative importance of extinction prevention appear relatively fixed and are hard to change by reason-based interventions.


Here is my take:

The study found that lay people rated extinction prevention as more important than addressing climate change, poverty, and inequality. However, they rated extinction prevention as less important than promoting peace and security, and ensuring the well-being of future generations.

The study's authors suggest that these findings may be due to the fact that lay people perceive extinction prevention as a more existential threat than other societal issues. They also suggest that lay people may be more likely to prioritize extinction prevention if they believe that it is achievable.

Key findings:
  • Lay people prioritize extinction prevention, but not above all other societal issues.
  • Lay people rated extinction prevention as more important than addressing climate change, poverty, and inequality.
  • Lay people rated extinction prevention as less important than promoting peace and security, and ensuring the well-being of future generations.
  • The study's authors suggest that these findings may be due to the fact that lay people perceive extinction prevention as a more existential threat than other societal issues.

Monday, November 13, 2023

Prosociality should be a public health priority

Kubzansky, L.D., Epel, E.S. & Davidson, R.J. 
Nat Hum Behav (2023).
https://doi.org/10.1038/s41562-023-01717-3

Standfirst:

Hopelessness and despair threaten health and longevity. We urgently need strategies to counteract these effects and improve population health. Prosociality contributes to better mental and physical health for individuals, and for the communities in which they live. We propose that prosociality should be a public health priority.

Comment:

The COVID-19 pandemic produced high levels of stress, loneliness, and mental health problems, magnifying global trends in health disparities.1 Hopelessness and despair are growing problems particularly in the U.S. The sharp increase in rates of poor mental health is problematic in its own right, but poor mental health also contributes to greater morbidity and mortality. Without action, we will see steep declines in global population health and related costs to society. An approach that is “more of the same” is insufficient to stem the cascading effects of emotional ill-being. Something new is desperately needed.

To this point, recent work called on the discipline of psychiatry to contribute more meaningfully to the deaths of despair framework (i.e., conceptualizing rises in suicide, drug poisoning and alcoholic liver disease as due to misery of difficult social and economic circumstances).2 Recognizing that simply expanding mental health services cannot address the problem, the authors noted the importance of population-level prevention and targeting macro-level causes for intervention. This requires identifying upstream factors causally related to these deaths. However, factors explaining population health trends are poorly delineated and focus on risks and deficits (e.g., adverse childhood experiences, unemployment). A ‘deficit-based’ approach has limits as the absence of a risk factor does not inevitably indicate presence of a protective asset; we also need an ‘asset-based’ approach to understanding more comprehensively the forces that shape good health and buffer harmful effects of stress and adversity.


My take:

Prosociality refers to positive behaviors and beliefs that benefit others. It is a broad concept that encompasses many different qualities, such as altruism, trust, reciprocity, compassion, and empathy.

Research has shown that prosociality has a number of benefits for both individuals and communities. For individuals, prosociality can lead to improved mental and physical health, greater life satisfaction, and stronger social relationships. For communities, prosociality can lead to increased trust and cooperation, reduced crime rates, and improved overall well-being.

The authors of the article argue that prosociality should be a public health priority. They point out that prosociality can help to address a number of major public health challenges, such as loneliness, social isolation, and mental illness. They also argue that prosociality can help to build stronger communities and create a more just and equitable society.

Sunday, November 12, 2023

Ignorance by Choice: A Meta-Analytic Review of the Underlying Motives of Willful Ignorance and Its Consequences

Vu, L., Soraperra, I., Leib, M., et al. (2023).
Psychological Bulletin, 149(9-10), 611–635.
https://doi.org/10.1037/bul0000398

Abstract

People sometimes avoid information about the impact of their actions as an excuse to be selfish. Such “willful ignorance” reduces altruistic behavior and has detrimental effects in many consumer and organizational contexts. We report the first meta-analysis on willful ignorance, testing the robustness of its impact on altruistic behavior and examining its underlying motives. We analyze 33,603 decisions made by 6,531 participants in 56 different treatment effects, all employing variations of an experimental paradigm assessing willful ignorance. Meta-analytic results reveal that 40% of participants avoid easily obtainable information about the consequences of their actions on others, leading to a 15.6-percentage point decrease in altruistic behavior compared to when information is provided. We discuss the motives behind willful ignorance and provide evidence consistent with excuse-seeking behaviors to maintain a positive self-image. We investigate the moderators of willful ignorance and address the theoretical, methodological, and practical implications of our findings on who engages in willful ignorance, as well as when and why.

Public Significance Statement

We present the first meta-analysis on willful ignorance—when individuals avoid information about the negative consequences of their actions to maximize personal outcomes—covering 33,603 decisions made by 6,531 participants across 56 treatment effects. Results demonstrate that the ability to avoid such information decreases altruistic behavior, and that seemingly altruistic behavior may not reflect a true concern for others.


Key findings of the meta-analysis include:

Prevalence of Willful Ignorance: Approximately 40% of participants in the analyzed studies chose to avoid learning about the negative impact of their actions on others.

Impact on Altruism: Willful ignorance significantly reduces altruistic behavior. When provided with information about the consequences of their actions, participants were 15.6 percentage points more likely to engage in altruistic acts compared to those who chose to remain ignorant.

Motives for Willful Ignorance: The study suggests that willful ignorance may serve as a self-protective mechanism to maintain a positive self-image. By avoiding information about the harm caused by their actions, individuals can protect their self-perception as moral and ethical beings.

Saturday, November 11, 2023

One of the top concerns is the moral decline of today’s youth, survey finds

Valerie Pritchett
abc27 News
Originally published 9 NOV 23

Yes, I do TV interviews as well.  The video plays after the commercial.


Discordant benevolence: How and why people help others in the face of conflicting values.

Cowan, S. K., Bruce, T. C., et al. (2022).
Science Advances, 8(7).

Abstract

What happens when a request for help from friends or family members invokes conflicting values? In answering this question, we integrate and extend two literatures: support provision within social networks and moral decision-making. We examine the willingness of Americans who deem abortion immoral to help a close friend or family member seeking one. Using data from the General Social Survey and 74 in-depth interviews from the National Abortion Attitudes Study, we find that a substantial minority of Americans morally opposed to abortion would enact what we call discordant benevolence: providing help when doing so conflicts with personal values. People negotiate discordant benevolence by discriminating among types of help and by exercising commiseration, exemption, or discretion. This endeavor reveals both how personal values affect social support processes and how the nature of interaction shapes outcomes of moral decision-making.

Here is my summary:

Using data from the General Social Survey and 74 in-depth interviews from the National Abortion Attitudes Study, the authors find that a substantial minority of Americans morally opposed to abortion would enact discordant benevolence. They also find that people negotiate discordant benevolence by discriminating among types of help and by exercising commiseration, exemption, or discretion.

Commiseration involves understanding and sharing the other person's perspective, even if one does not agree with it. Exemption involves excusing oneself from helping, perhaps by claiming ignorance or lack of resources. Discretion involves helping in a way that minimizes the conflict with one's own values, such as by providing emotional support or practical assistance but not financial assistance.

The authors argue that discordant benevolence is a complex phenomenon that reflects the interplay of personal values, social relationships, and moral decision-making. They conclude that discordant benevolence is a significant form of social support, even in cases where it is motivated by conflicting values.

In other words, the research suggests that people are often willing to help others in need even when doing so conflicts with their personal values, because they also place value on their social relationships and on caring for others. They manage that conflict by discriminating among types of help or by exercising commiseration, exemption, or discretion.

Friday, November 10, 2023

Attitudes in an interpersonal context: Psychological safety as a route to attitude change

Itzchakov, G., & DeMarree, K. G. (2022).
Frontiers in Psychology, 13.

Abstract

Interpersonal contexts can be complex because they can involve two or more people who are interdependent, each of whom is pursuing both individual and shared goals. Interactions consist of individual and joint behaviors that evolve dynamically over time. Interactions are likely to affect people’s attitudes because the interpersonal context gives conversation partners a great deal of opportunity to intentionally or unintentionally influence each other. However, despite the importance of attitudes and attitude change in interpersonal interactions, this topic remains understudied. To shed light on the importance of this topic, we briefly review the features of interpersonal contexts and build a case that understanding people’s sense of psychological safety is key to understanding interpersonal influences on people’s attitudes. Specifically, feeling psychologically safe can make individuals more open-minded, increase reflective introspection, and decrease defensive processing. Psychological safety impacts how individuals think, make sense of their social world, and process attitude-relevant information. These processes can result in attitude change, even without any attempt at persuasion. We review the literature on interpersonal threats, receiving psychological safety, providing psychological safety, and interpersonal dynamics. We then detail the shortcomings of current approaches, highlight unanswered questions, and suggest avenues for future research that can contribute to developing this field.


This is part of the reason psychotherapy works.

My summary:

Attitudes are evaluations of people, objects, or ideas, and they can be influenced by a variety of factors, including interpersonal interactions. Psychological safety is a climate in which individuals feel safe to take risks, make mistakes, and be vulnerable. When people feel psychologically safe, they are more likely to express their true thoughts and feelings, which can lead to attitude change.

There are a number of ways that psychological safety can promote attitude change. First, feeling psychologically safe can make people more open-minded. When people feel safe, they are more likely to consider new information and perspectives, even if they challenge their existing beliefs. Second, psychological safety can increase reflective introspection. When people feel safe to be vulnerable, they are more likely to reflect on their own thoughts and feelings, which can lead to deeper insights and changes in attitude. Third, psychological safety can decrease defensive processing. When people feel safe, they are less likely to feel threatened by new information or perspectives, which makes them more open to considering them.

Research has shown that psychological safety can lead to attitude change in a variety of interpersonal contexts, including romantic relationships, friendships, and work teams. For example, one study found that couples who felt psychologically safe in their relationships were more likely to change their attitudes towards each other over time. Another study found that employees who felt psychologically safe in their teams were more likely to change their attitudes towards diversity and inclusion.

Thursday, November 9, 2023

Moral Future-Thinking: Does the Moral Circle Stand the Test of Time?

Law, K. F., Syropoulos, S., et al. (2023, August 10). 
PsyArXiv

Abstract

The long-term collective welfare of humanity may lie in the hands of those who are presently living. But do people normatively include future generations in their moral circles? Across four studies conducted on Prolific Academic (total N = 823), we find evidence for a progressive decline in the subjective moral standing of future generations, demonstrating decreasing perceived moral obligation, moral concern, and prosocial intentions towards other people with increasing temporal distance. While participants generally tend to display present-oriented moral preferences, we also reveal individual differences that mitigate this tendency and predict pro-future outcomes, including individual variation in longtermism beliefs and the vividness of one’s imagination. Our studies reconcile conflicting evidence in the extant literature on moral judgment and future-thinking, shed light on the role of temporal distance in moral circle expansion, and offer practical implications for better valuing and safeguarding the shared future of humanity.

Here's my summary:

This research investigates whether people normatively include future generations in their moral circles. The authors conducted four studies with a total of 823 participants, and found evidence for a progressive decline in the subjective moral standing of future generations with increasing temporal distance. This suggests that people generally tend to display present-oriented moral preferences.

However, the authors also found individual differences that mitigate this tendency and predict pro-future outcomes. These factors include individual variation in longtermism beliefs and the vividness of one's imagination. The authors also found that people are more likely to include future generations in their moral circles when they are primed to think about them or when they are asked to consider the long-term consequences of their actions.

The authors' findings reconcile conflicting evidence in the extant literature on moral judgment and future-thinking. They also shed light on the role of temporal distance in moral circle expansion and offer practical implications for better valuing and safeguarding the shared future of humanity.

Overall, the research paper provides evidence that people generally tend to prioritize the present over the future when making moral judgments. However, the authors also identify individual factors and contextual conditions that can promote moral future-thinking. These findings could be used to develop interventions that encourage people to consider the long-term consequences of their actions and to take steps to protect the well-being of future generations.

Wednesday, November 8, 2023

Everything you need to know about artificial wombs

Cassandra Willyard
MIT Technology Review
Originally posted 29 SEPT 23

Here is an excerpt:

What is an artificial womb?

An artificial womb is an experimental medical device intended to provide a womblike environment for extremely premature infants. In most of the technologies, the infant would float in a clear “biobag,” surrounded by fluid. The idea is that preemies could spend a few weeks continuing to develop in this device after birth, so that “when they’re transitioned from the device, they’re more capable of surviving and having fewer complications with conventional treatment,” says George Mychaliska, a pediatric surgeon at the University of Michigan.

One of the main limiting factors for survival in extremely premature babies is lung development. Rather than breathing air, babies in an artificial womb would have their lungs filled with lab-made amniotic fluid that mimics the amniotic fluid they would have had in utero. Neonatologists would insert tubes into blood vessels in the umbilical cord so that the infant’s blood could cycle through an artificial lung to pick up oxygen.

The device closest to being ready to be tested in humans, called the EXTrauterine Environment for Newborn Development, or EXTEND, encases the baby in a container filled with lab-made amniotic fluid. It was invented by Alan Flake and Marcus Davey at the Children’s Hospital of Philadelphia and is being developed by Vitara Biomedical.


Here is my take:

Artificial wombs are experimental medical devices that aim to provide a womb-like environment for extremely premature infants. The technology is still in its early stages of development, but it has the potential to save the lives of many babies who would otherwise not survive.

Overall, artificial wombs are a promising new technology with the potential to revolutionize the care of premature infants. However, more research is needed to fully understand the risks and benefits of the technology before it can be widely used.

Here are some additional ethical concerns that have been raised about artificial wombs:
  • The potential for artificial wombs to be used to create designer babies or to prolong the lives of fetuses with severe disabilities.
  • The potential for artificial wombs to be used to exploit or traffic babies.
  • The potential for artificial wombs to exacerbate existing social and economic inequalities.
It is important to have a public conversation about these ethical concerns before artificial wombs become widely available. We need to develop clear guidelines for how the technology should be used and ensure that it is used in a way that benefits all of society.