Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Friday, February 28, 2025

Toward a theory of AI errors: making sense of hallucinations, catastrophic failures, and the fallacy of generative AI

Barassi, V. (2024).
Harvard Data Science Review, Special Issue 5. 

Abstract

The rise of generative AI confronts us with new and key questions about AI failure, and how we make sense of and learn how to coexist with it. While computer scientists understand AI failure as something that we can learn from and predict, in this article I argue that we need to understand AI failure as a complex social reality that is defined by the interconnection between our data, technological design, and structural inequalities, by processes of commodification, and by everyday political and social conflicts. Yet I also show that to make sense of the complexity of AI failure we need a theory of AI errors. Bringing philosophical approaches to error theory together with anthropological perspectives, I argue that a theory of error is essential because it sheds light on the fact that the failures in our systems derive from processes of erroneous knowledge production, from mischaracterizations and flawed cognitive relations. A theory of AI errors, therefore, ultimately confronts us with the question about what types of cognitive relations and judgments define our AI systems, and sheds light on their deep-seated limitations when it comes to making sense of our social worlds and human life.

Here are some thoughts:

As generative AI technologies continue to advance, they bring remarkable capabilities alongside significant challenges. Veronica Barassi’s work delves into these challenges by critically examining AI "hallucinations" and errors. Her analysis emphasizes the need for a foundational theory to understand and address the societal implications of these phenomena.

AI systems often generate outputs that are factually incorrect or nonsensical, commonly referred to as "hallucinations." These failures arise from the design of AI models, which prioritize persuasive, realistic outputs over factual accuracy. Barassi contends that these errors are not merely isolated technical glitches but the product of erroneous knowledge production influenced by biases and flawed cognitive patterns inherent in the systems. Furthermore, describing these inaccuracies as hallucinations risks anthropomorphizing AI, attributing human-like cognitive capabilities to machines that operate solely on probabilistic algorithms. This misrepresentation can distort ethical evaluations and public understanding of AI's true capabilities and limitations.

One major concern highlighted is the structural homogenization of AI systems. Foundation models, which underpin many generative AI technologies, rely on vast datasets and self-supervised learning. While this approach enables scalability and adaptability, it also amplifies systemic flaws, making AI vulnerable to cascading failures. These failures, when integrated into critical systems such as infrastructure or healthcare, could have catastrophic societal consequences.

Barassi also underscores the social dimensions of AI errors, which often reflect deeper societal issues. Embedded biases in data and technology perpetuate structural inequalities, disproportionately impacting marginalized groups. She argues that AI failures cannot be seen as mere technical bugs but must be understood as sociotechnical constructs shaped by interactions between humans and machines. Addressing these challenges requires a robust theoretical framework combining philosophical error theories and anthropological insights. This perspective shifts the focus from technical fixes to understanding AI errors as reflections of our cultural, social, and epistemological landscapes.

Another critical issue is the cultural and linguistic bias inherent in many AI systems. Language models primarily reflect English-speaking, Western contexts, marginalizing the diversity of human experiences. This lack of inclusivity highlights the need for broader, more representative datasets and culturally sensitive approaches to AI development.

Barassi’s analysis calls for a paradigm shift in how we approach AI failures. Rather than viewing errors as problems to be eliminated, she advocates for accepting their inevitability and critically engaging with them. Policymakers, developers, and users must prioritize transparency, accountability, and inclusivity to mitigate AI's societal risks. By fostering interdisciplinary collaboration and embracing the complexity of AI systems, society can better navigate the challenges and opportunities presented by this rapidly evolving technology. This approach emphasizes coexistence with AI’s limitations while fostering ethical development and deployment practices.

Thursday, February 27, 2025

Asimov's Laws: A Blueprint for Ethical AI in Psychological Practice?

Gavazzi, J., & Knapp, S.
The Pennsylvania Psychologist
(2025). Advance online publication.

Abstract

This paper explores the application of Isaac Asimov’s Three Laws of Robotics as a framework for ethical AI integration in psychological practice. While Asimov’s laws were originally conceived for fictional robots, they offer valuable insights into ethical concerns surrounding AI in mental health services. The paper examines how these laws align with psychological ethics, particularly principles such as nonmaleficence, beneficence, autonomy, and fidelity. The First Law emphasizes preventing harm and ensuring patient well-being, addressing risks such as biased AI algorithms, privacy concerns, and over-reliance on automated decision-making. The Second Law highlights the necessity of maintaining human control over AI-assisted therapy and preventing professional deskilling. The Third Law underscores the importance of AI reliability, security, and transparency in clinical applications. While Asimov’s laws provide a foundational ethical lens, they are inherently limited compared to broader principle-based ethical frameworks, such as those proposed by Beauchamp and Childress. The paper concludes that while Asimov’s laws serve as a useful starting point, ethical AI integration in psychological practice requires continuous adaptation and oversight to ensure responsible implementation.

Wednesday, February 26, 2025

Why ethics is becoming AI's biggest challenge

Joe McKendrick
zdnet.com
Originally posted 27 Dec 24

Many organizations are either delaying or pulling the plug on generative AI due to concerns about its ethics and safety. This is prompting calls to move AI out of technology departments and involve more non-technical business stakeholders in AI design and management.

More than half (56%) of businesses are delaying major investments in generative AI until there is clarity on AI standards and regulations, according to a recent survey from the IBM Institute for Business Value. At least 72% say they are willing to forgo generative AI benefits due to ethical concerns.

More challenging than technology issues

Many of the technical issues associated with artificial intelligence have been resolved, but the hard work surrounding AI ethics is now coming to the forefront. This is proving even more challenging than addressing technology issues.

The challenge for development teams at this stage is "to recognize that creating ethical AI is not strictly a technical problem but a socio-technical problem," said Phaedra Boinodiris, global leader for trustworthy AI at IBM Consulting, in a recent podcast. This means extending AI oversight beyond IT and data management teams across organizations.


Here are some thoughts:

Many organizations are delaying or halting investments in generative AI due to ethical and safety concerns, prompting calls to involve non-technical stakeholders in AI design and management. A recent IBM survey found that 56% of businesses are postponing major AI investments until regulatory clarity emerges, with 72% willing to forgo AI benefits due to ethical worries. While technical challenges have largely been resolved, ethical concerns are proving more complex, requiring a socio-technical approach. Phaedra Boinodiris of IBM Consulting emphasizes that ethical AI development demands multidisciplinary teams, including experts in linguistics, philosophy, and diverse life experiences, to address questions like unintended effects and data appropriateness.

Business leaders increasingly see AI ethics as a competitive advantage, with 75% viewing it as a differentiator and 54% considering it strategically vital. Consumers and employees also value ethical AI. An effective AI ethics framework can yield three types of ROI: economic (e.g., cost savings), capabilities (e.g., long-term innovation), and reputational (e.g., improved brand perception). However, many executives lack awareness of these impacts, highlighting the need for ongoing education to align AI ethics with broader organizational goals.

Tuesday, February 25, 2025

Making progress in reducing veteran suicide rates

Wes Martin
Stars and Stripes
Originally posted 23 Jan 25

According to the Department of Veterans Affairs and Centers for Disease Control and Prevention, the suicide rate among veterans is nearly 60% higher than the general population. It is one of the leading causes of death among veterans under the age of 45. Post-traumatic stress disorder being left untreated or mistreated adds to the problem. PTSD leads to overwhelming feelings of hopelessness, emotional numbness and isolation. These are directly linked to suicidal ideation. Flashbacks, nightmares, hypervigilance, anger and avoidance behaviors severely disrupt daily functioning, further exacerbating depression and making recovery feel impossible. Self-medicating with substances like alcohol or drugs — common for those suffering — can further compound the issue.

A broad stroke of traditional medication and talk therapy is not enough to combat the complexities involved in this crisis. Often when addressing PTSD and other mental health related treatments, heavy pharmaceuticals will be applied. This method is rife with dangerous drug side-effects coinciding with the risk of reliance and addiction to a drug not specifically adept at correcting the misfiring brain chemistry. It can be a “wet-blanket” effect, leaving patients feeling empty or zombie-like while simply going through the motions of life.

Emerging treatments such as psychedelics are being explored with comprehensive medical evaluation as long-term recovery options. Ibogaine is one example. Derived from the African plant Tabernanthe iboga, ibogaine has been studied for its potential to alleviate symptoms of PTSD, depression and anxiety. Recent research indicates that ibogaine can effectively reduce those symptoms in veterans with traumatic brain injuries.


Here are some thoughts:

The suicide rate among veterans is nearly 60% higher than the general population, with PTSD being a major contributor. Untreated PTSD leads to hopelessness, emotional numbness, and isolation, often exacerbated by self-medication with substances. Traditional treatments like pharmaceuticals and talk therapy are often insufficient, risking side effects and addiction without addressing root causes. Emerging treatments, such as ibogaine—a psychedelic derived from the African plant Tabernanthe iboga—show promise. Ibogaine promotes neuroplasticity and may "reset" brain pathways damaged by trauma, potentially reversing PTSD-related changes. Companies like mPath Therapeutics Corp. are developing safe, regulated ibogaine treatments. Prioritizing innovative, evidence-based therapies like ibogaine could significantly reduce PTSD, depression, and suicide rates among veterans, offering hope for long-term recovery.

Monday, February 24, 2025

Scaffolding Bad Moral Agents

Jefferson, A., Heinrichs, J., & Sifferd, K. (2024).
Topoi.

Abstract

Recent work on ecological accounts of moral responsibility and agency has argued for the importance of social environments for moral reasons responsiveness. Moral audiences can scaffold individual agents’ sensitivity to moral reasons and their motivation to act on them, but they can also undermine it. In this paper, we look at two case studies of ‘scaffolding bad’, where moral agency is undermined by social environments: street gangs and online incel communities. In discussing these case studies, we draw both on recent situated cognition literature and on scaffolded responsibility theory. We show that the way individuals are embedded into a specific social environment changes the moral considerations they are sensitive to in systematic ways because of the way these environments scaffold affective and cognitive processes, specifically those that concern the perception and treatment of ingroups and outgroups. We argue that gangs undermine reasons responsiveness to a greater extent than incel communities because gang members are more thoroughly immersed in the gang environment.

Here are some thoughts:

The paper explores the concept of "scaffolding bad" in moral agency, focusing on how social environments can negatively influence an individual's moral reasoning and behavior. The authors examine two case studies: street gangs and online incel communities, to demonstrate how certain social environments can undermine moral reasons responsiveness.

The research draws on situated cognition literature and scaffolded responsibility theory to explain how social environments shape cognitive and affective processes. The concept of scaffolding is central to the argument, describing how organisms use their environment to perform functions they couldn't do alone. However, the authors emphasize that scaffolding isn't always beneficial and can lead to what they term "hostile scaffolding" - environmental structures that shape cognition and emotion against an individual's interests.

Key to the paper's argument is the idea that moral reasons responsiveness depends heavily on social environments. These environments can either support or corrupt moral cognition by influencing how individuals perceive and treat ingroups and outgroups. The authors argue that gangs are particularly problematic, as they more thoroughly immerse members in a harmful environment and are more difficult to leave compared to online incel communities.

The research highlights how social feedback plays a crucial role in developing moral agency. This feedback not only provides information about right and wrong but also motivates individuals to be sensitive to moral and social norms. Over time, this external feedback becomes internalized, shaping an individual's moral reasoning even in the absence of direct social oversight.

Ultimately, the paper demonstrates the complex ways in which social environments can scaffold moral agency, sometimes reinforcing harmful values and limiting empathy towards others. By comparing street gangs and online incel communities, the authors provide insight into how different social contexts can systematically alter an individual's moral sensitivity and reasons responsiveness.

Sunday, February 23, 2025

Telehealth Brief Cognitive Behavioral Therapy for Suicide Prevention: A Randomized Clinical Trial

Baker, J. C., et al. (2024).
JAMA Network Open, 7(11), e2445913.

Abstract

Importance  Suicide rates continue to increase in the US. Evidence-based treatments for suicide risk exist, but their effectiveness when delivered via telehealth remains unknown.

Objective  To test the efficacy of brief cognitive behavioral therapy (BCBT) for reducing suicide attempts and suicidal ideation among high-risk adults when delivered via telehealth.

Design, Setting, and Participants  This 2-group parallel randomized clinical trial comparing BCBT with present-centered therapy (PCT) was conducted from April 2021 to September 2023 with 1-year follow-up at an outpatient psychiatry and behavioral health clinic located in the midwestern US. Participants reporting suicidal ideation during the past week and/or suicidal behavior during the past month were recruited from clinic waiting lists, inpatient service, intermediate care, research match, and direct referrals from clinicians. A total of 768 participants were invited to participate, 112 were assessed for eligibility, and 98 were eligible and randomly assigned to a treatment condition. Data analysis was from April to September 2024.

Interventions  Participants received either BCBT, an evidence-based suicide-focused treatment that teaches emotion regulation and reappraisal skills, or PCT, a goal-oriented treatment that helps participants identify adaptive responses to stressors. Participants were randomized using a computerized stratified randomization algorithm with 2 strata (sex and history of suicide attempts).

Conclusions and Relevance  The findings of this randomized clinical trial suggest that BCBT delivered via video telehealth is effective for reducing suicide attempts among adults with recent suicidal thoughts and/or behaviors.


Here are some thoughts:

The study investigated the effectiveness of brief cognitive behavioral therapy (BCBT) delivered via telehealth for suicide prevention. Conducted from April 2021 to September 2023, the randomized clinical trial involved 96 adults with recent suicidal ideation or behaviors, comparing BCBT with present-centered therapy (PCT).

The research addressed a critical public health concern, noting that suicide rates in the US have increased by over 33% since 2000, with 49,449 suicides recorded in 2022. The study aimed to test whether BCBT could be effectively delivered through telehealth, a method that became increasingly prevalent during the COVID-19 pandemic.

Key findings revealed that participants receiving BCBT experienced significantly fewer suicide attempts compared to those in the PCT group: the BCBT group averaged 0.70 suicide attempts per participant versus 1.40 in the PCT group, corresponding to a 41% lower risk of making a suicide attempt. Both treatment groups showed significant reductions in suicidal ideation severity, with no statistically significant difference between them.

The study's design included 12 weekly individual sessions delivered remotely, with participants randomized across two strata: biological sex and history of suicide attempts. BCBT focused on teaching emotion regulation and cognitive reappraisal skills, while PCT provided a more supportive, less structured approach to addressing life stressors.

These findings are particularly significant as they demonstrate the potential of telehealth in delivering evidence-based suicide prevention interventions, potentially improving access to critical mental health services for high-risk individuals.

Saturday, February 22, 2025

Preliminaries to artificial consciousness: a multidimensional heuristic approach.

Evers, K., et al. (2025).
arXiv.org.

Abstract

The pursuit of artificial consciousness requires conceptual clarity to navigate its theoretical and empirical challenges. This paper introduces a composite, multilevel, and multidimensional model of consciousness as a heuristic framework to guide research in this field. Consciousness is treated as a complex phenomenon, with distinct constituents and dimensions that can be operationalized for study and for evaluating their replication. We argue that this model provides a balanced approach to artificial consciousness research by avoiding binary thinking (e.g., conscious vs. non-conscious) and offering a structured basis for testable hypotheses. To illustrate its utility, we focus on "awareness" as a case study, demonstrating how specific dimensions of consciousness can be pragmatically analyzed and targeted for potential artificial instantiation. By breaking down the conceptual intricacies of consciousness and aligning them with practical research goals, this paper lays the groundwork for a robust strategy to advance the scientific and technical understanding of artificial consciousness.


Here are some thoughts:

The paper introduces a comprehensive approach to understanding artificial consciousness by proposing a composite, multilevel, and multidimensional model that aims to provide conceptual clarity in this complex field. The authors argue that the pursuit of artificial consciousness requires a nuanced framework that moves beyond simplistic binary thinking of "conscious versus non-conscious" systems.

The research emphasizes the importance of analytical clarity and logical coherence when exploring artificial consciousness. The authors highlight a critical challenge in the field - the "analytical fallacy" - which occurs when researchers inappropriately derive empirical findings directly from theoretical premises without sufficient independent validation. This approach can lead to circular reasoning and potentially misleading conclusions about consciousness.

A key contribution of the paper is its treatment of consciousness as a complex phenomenon with distinct constituents and dimensions that can be systematically studied and potentially replicated. By focusing on "awareness" as a case study, the authors demonstrate how specific dimensions of consciousness can be pragmatically analyzed and targeted for potential artificial instantiation.

The model proposed seeks to address the multifaceted nature of consciousness, acknowledging the field's current pre-scientific state and the need for a balanced, empirically informed approach. It aims to provide researchers with a structured framework for developing testable hypotheses about artificial consciousness, ultimately advancing both scientific understanding and technical exploration of this profound concept.

Friday, February 21, 2025

Evaluating trends in private equity ownership and impacts on health outcomes, costs, and quality: systematic review

Borsa, A., Bejarano, G., Ellen, M., & Bruch, J. D. (2023).
BMJ, 382, e075244.

Abstract

Objective
To review the evidence on trends and impacts of private equity (PE) ownership of healthcare operators.

Data synthesis 
Studies were classified as finding either beneficial, harmful, mixed, or neutral impacts of PE ownership on main outcome measures. Results across studies were narratively synthesized and reported. Risk of bias was evaluated using ROBINS-I (Risk Of Bias In Non-randomised Studies of Interventions).

Results
The electronic search identified 1778 studies, with 55 meeting the inclusion criteria. Studies spanned eight countries, with most (n=47) analyzing PE ownership of healthcare operators in the US. Nursing homes were the most commonly studied healthcare setting (n=17), followed by hospitals and dermatology settings (n=9 each); ophthalmology (n=7); multiple specialties or general physician groups (n=5); urology (n=4); gastroenterology and orthopedics (n=3 each); surgical centers, fertility, and obstetrics and gynecology (n=2 each); and anesthesia, hospice care, oral or maxillofacial surgery, otolaryngology, and plastics (n=1 each). Across the outcome measures, PE ownership was most consistently associated with increases in costs to patients or payers. Additionally, PE ownership was associated with mixed to harmful impacts on quality. These outcomes held in sensitivity analyses in which only studies with moderate risk of bias were included. Health outcomes showed both beneficial and harmful results, as did costs to operators, but the volume of studies for these outcomes was too low for conclusive interpretation. In some instances, PE ownership was associated with reduced nurse staffing levels or a shift towards lower nursing skill mix. No consistently beneficial impacts of PE ownership were identified.

Conclusions
Trends in PE ownership rapidly increased across almost all healthcare settings studied. Such ownership is often associated with harmful impacts on costs to patients or payers and mixed to harmful impacts on quality. Owing to risk of bias and frequent geographic focus on the US, conclusions might not be generalizable internationally.

Here are some thoughts:

This systematic review examines the increasing trends and impacts of private equity (PE) ownership in healthcare across eight countries, primarily focusing on the US. Analyzing 55 empirical studies, the review assessed PE's influence on health outcomes, costs to patients/payers and operators, and quality of care in settings like nursing homes, hospitals, and dermatology practices. The findings reveal a rapid increase in PE ownership across various healthcare settings, with PE ownership most consistently associated with increased costs to patients or payers and mixed to harmful impacts on quality. While health outcomes and operator costs showed mixed results due to a limited number of studies, some instances linked PE ownership to reduced nurse staffing levels or a shift toward lower nursing skill mix. The review identified no consistently beneficial impacts of PE ownership, leading the authors to conclude that such ownership is often associated with harmful impacts on costs and mixed to harmful impacts on quality. However, they caution that these conclusions might not be generalizable internationally due to the risk of bias in the included studies and the geographic focus on the US, highlighting the need for increased attention and possibly increased regulation.

Thursday, February 20, 2025

Enhancing competencies for the ethical integration of religion and spirituality in psychological services

Currier, J. M. et al. (2023).
Psychological Services, 20(1), 40–50.

Abstract

Advancement of Spiritual and religious competencies aligns with increasing attention to the pivotal role of multiculturalism and intersectionality, as well as shifts in organizational values and strategies, that shape the delivery of psychological services (e.g., evidence-based practice). A growing evidence base also attests to ethical integration of peoples’ religious faith and/or spirituality (R/S) in their mental care as enhancing the utilization and efficacy of psychological services. When considering the essential attitudes, knowledge, and skills for addressing religious and spiritual aspects of clients’ lives, lack of R/S competencies among psychologists and other mental health professionals impedes ethical and effective practice. The purpose of this article is to discuss the following: (a) skills for negotiating ethical challenges with spiritually integrated care; and (b) strategies for assessing a client’s R/S. We also describe systemic barriers to ethical integration of R/S in mental health professions and briefly introduce our Spiritual and Religious Competencies project. Looking ahead, a strategic, interdisciplinary, and comprehensive approach is needed to transform the practice of mental health care in a manner that more fully aligns with the values, principles, and expectations across our disciplines’ professional ethical codes and accreditation standards. We propose that explicit training across mental health professions is necessary to more fully honor R/S diversity and the importance of this layer of identity and intersectionality in many peoples’ lives.

Impact Statement

Psychologists and other mental health professionals often lack necessary awareness, knowledge, and skills to address their clients’ religious faith and/or spirituality (R/S). This article explores ethical considerations regarding Spiritual and Religious Competencies in training and clinical practice, approaches to R/S assessment, as well as barriers and solutions to ethical integration of R/S in psychological services.

Wednesday, February 19, 2025

The Moral Psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A. (2023).
Annual Review of Psychology, 75(1), 653–675.

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Here are some thoughts:

This article delves into the evolving moral landscape shaped by artificial intelligence (AI). As AI technology progresses rapidly, it introduces a new category for moral consideration: intelligent machines.

Machines as moral agents are capable of making decisions that have significant moral implications. This includes scenarios where AI systems can inadvertently cause harm through errors, such as misdiagnosing a medical condition or misclassifying individuals in security contexts. The authors highlight that societal expectations for these machines are often unrealistically high; people tend to require AI systems to outperform human capabilities significantly while simultaneously overestimating human error rates. This disparity raises critical questions about how many mistakes are acceptable from machines in life-and-death situations and how these errors are distributed among different demographic groups.

In their role as moral patients, machines become subjects of human moral behavior. This perspective invites exploration into how humans interact with AI—whether cooperatively or competitively—and the potential biases that may arise in these interactions. For instance, there is a growing concern about algorithmic bias, where certain demographic groups may be unfairly treated by AI systems due to flawed programming or data sets.

Lastly, machines serve as moral proxies, acting as intermediaries in human interactions. This role allows individuals to delegate moral decision-making to machines or use them to mask unethical behavior. The implications of this are profound, as it raises ethical questions about accountability and the extent to which humans can offload their moral responsibilities onto AI.

Overall, the article underscores the urgent need for a deeper understanding of the psychological dimensions associated with AI's integration into society. As encounters between humans and intelligent machines become commonplace, addressing issues of trust, bias, and ethical alignment will be crucial in shaping a future where AI can be safely and effectively integrated into daily life.

Tuesday, February 18, 2025

Pulling Out the Rug on Informed Consent — New Legal Threats to Clinicians and Patients

Underhill, K., & Nelson, K. M. (2025).
New England Journal of Medicine.

In recent years, state legislators in large portions of the United States have devised and enacted new legal strategies to limit access to health care for transgender people. To date, 26 states have enacted outright bans on gender-affirming care, which thus far apply only to minors. Other state laws create financial or procedural obstacles to this type of care, such as bans on insurance coverage, requirements to obtain opinions from multiple clinicians, or consent protocols that are stricter than those for other health care.

These laws target clinicians who provide gender-affirming care, but all clinicians — in every jurisdiction and specialty — should take note of the intrusive legal actions that are emerging in the regulation of health care for transgender people. Like the development of restrictive abortion laws, new legal tactics for attacking gender-affirming care are likely to guide legislative opposition to other politically contested medical interventions. Here we consider one particular legal strategy that, if more widely adopted, could challenge the legal infrastructure underlying U.S. health care.

The article is paywalled. :(

The author was kind and sent a copy to me.

Here are some thoughts.

The article discusses the increasing legal strategies employed by state legislators to restrict access to healthcare for transgender people, particularly minors. It focuses on a new legal technique in Utah that allows patients who received "hormonal transgender treatment" or surgery on "sex characteristics" as minors to retroactively revoke their consent until the age of 25, potentially exposing clinicians to legal claims. This law challenges the core of the clinician-patient relationship and the legal infrastructure of U.S. healthcare by undermining the principle of informed consent.

The authors argue that Utah's law places an unreasonable burden on clinicians, extending beyond gender-affirming care and potentially deterring them from providing necessary medical services to minors. They express concern that this legal strategy could spread to other states and be applied to other politically contested medical interventions, such as contraception or vaccination. The authors conclude that allowing patients to withdraw consent retroactively threatens the foundation of the U.S. health care system, as it undermines clinicians' ability to rely on informed consent at the time of care and could destabilize access to various healthcare services.

Monday, February 17, 2025

Probabilistic Consensus through Ensemble Validation: A Framework for LLM Reliability

Naik, N. (2024).
arXiv (Cornell University). 

Abstract

Large Language Models (LLMs) have shown significant advances in text generation but often lack the reliability needed for autonomous deployment in high-stakes domains like healthcare, law, and finance. Existing approaches rely on external knowledge or human oversight, limiting scalability. We introduce a novel framework that repurposes ensemble methods for content validation through model consensus. In tests across 78 complex cases requiring factual accuracy and causal consistency, our framework improved precision from 73.1% to 93.9% with two models (95% CI: 83.5%-97.9%) and to 95.6% with three models (95% CI: 85.2%-98.8%). Statistical analysis indicates strong inter-model agreement (κ > 0.76) while preserving sufficient independence to catch errors through disagreement. We outline a clear pathway to further enhance precision with additional validators and refinements. Although the current approach is constrained by multiple-choice format requirements and processing latency, it offers immediate value for enabling reliable autonomous AI systems in critical applications.

Here are some thoughts.

The article presents a novel framework aimed at enhancing the reliability of Large Language Models (LLMs) through ensemble validation, addressing a critical challenge in deploying AI systems in high-stakes domains like healthcare, law, and finance. LLMs have demonstrated remarkable capabilities in text generation; however, their probabilistic nature often leads to inaccuracies that can have serious consequences when applied autonomously. The authors highlight that existing solutions either depend on external knowledge or require extensive human oversight, which limits scalability and efficiency.

In their research, they tested the framework across 78 complex cases requiring factual accuracy and causal consistency. The results showed a significant improvement in precision, increasing from 73.1% to 93.9% with two models and achieving 95.6% with three models. This improvement was attributed to the use of model consensus; by requiring agreement among multiple independent models, the approach narrows down the range of possible outcomes to those most likely to be correct. The statistical analysis indicated strong inter-model agreement while maintaining enough independence to identify errors through disagreement.
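
To make the consensus mechanism concrete, here is a minimal sketch of how ensemble validation of multiple-choice outputs could work. It is not the authors' implementation: the `Model` callable, the `consensus_answer` helper, and the unanimity threshold are illustrative assumptions; in practice each validator would be a separate LLM queried independently.

```python
from collections import Counter
from typing import Callable, List, Optional

# Hypothetical interface: a validator maps a multiple-choice question and its
# option labels to one chosen label (e.g., "A"-"D"). In a real system, each
# Model would wrap an independent LLM call.
Model = Callable[[str, List[str]], str]

def consensus_answer(
    question: str,
    options: List[str],
    models: List[Model],
    min_agreement: Optional[int] = None,
) -> Optional[str]:
    """Return an answer only if enough independent validators agree on it.

    Agreement is unanimous by default. Disagreement returns None, signalling
    that the case should be deferred (e.g., to human review) rather than
    answered autonomously.
    """
    required = min_agreement if min_agreement is not None else len(models)
    votes = Counter(model(question, options) for model in models)
    answer, count = votes.most_common(1)[0]
    return answer if count >= required else None

# Usage sketch with stand-in validators (placeholders for real LLM calls):
if __name__ == "__main__":
    validators: List[Model] = [
        lambda q, opts: "B",  # validator 1 picks B
        lambda q, opts: "B",  # validator 2 picks B
        lambda q, opts: "C",  # validator 3 disagrees
    ]
    result = consensus_answer(
        "Which conclusion follows from the case facts?", ["A", "B", "C", "D"], validators
    )
    print(result)  # None: no unanimous agreement, so the case is flagged for review
```

The sketch illustrates the trade-off implied by the paper's results: requiring agreement among validators raises precision, presumably at the cost of deferring the cases where they disagree, and adding validators tightens that filter further.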

The implications of this research are particularly important for psychologists and professionals in related fields. As AI systems become more integrated into clinical practice and research, ensuring their reliability is paramount for making informed decisions in mental health diagnosis and treatment planning. The framework's ability to enhance accuracy without relying on external knowledge bases or human intervention could facilitate the development of decision support tools that psychologists can trust. Additionally, understanding how ensemble methods can improve AI reliability may offer insights into cognitive biases and collective decision-making processes relevant to psychological research.

Sunday, February 16, 2025

Humor as a window into generative AI bias

Saumure, R., De Freitas, J., & Puntoni, S. (2025).
Scientific Reports, 15(1).

Abstract

A preregistered audit of 600 images by generative AI across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them “funnier”, the prevalence of stereotyped groups changes. While stereotyped groups for politically sensitive traits (i.e., race and gender) are less likely to be represented after making an image funnier, stereotyped groups for less politically sensitive traits (i.e., older, visually impaired, and people with high body weight groups) are more likely to be represented.

Here are some thoughts:

The researchers developed a novel method for uncovering biases in AI systems, using humor as a probe, and the results are unexpected. The research highlights how AI models, despite their advanced capabilities, can exhibit biases that are not immediately apparent. The approach involves probing the model's image-editing behavior (asking it to make images "funnier") to identify hidden prejudices, which can have significant implications for fairness and ethical AI deployment.

This research underscores a critical challenge in the field of artificial intelligence: ensuring that AI systems operate ethically and fairly. As AI becomes increasingly integrated into industries such as healthcare, finance, criminal justice, and hiring, the potential for biased decision-making poses significant risks. Biases in AI can perpetuate existing inequalities, reinforce stereotypes, and lead to unfair outcomes for individuals or groups. This study highlights the importance of prioritizing ethical AI development to build systems that are not only intelligent but also just and equitable.

To address these challenges, bias detection should become a standard practice in AI development workflows. The novel method introduced in this research provides a promising framework for identifying hidden biases, but it is only one piece of the puzzle. Organizations should integrate multiple bias detection techniques, encourage interdisciplinary collaboration, and leverage external audits to ensure their AI systems are as fair and transparent as possible.

Saturday, February 15, 2025

Does One Emotion Rule All Our Ethical Judgments?

Elizabeth Kolbert
The New Yorker
Originally published 13 Jan 25

Here is an excerpt:

Gray describes himself as a moral psychologist. In contrast to moral philosophers, who search for abstract principles of right and wrong, moral psychologists are interested in the empirical matter of people’s perceptions. Gray writes, “We put aside questions of how we should make moral judgments to examine how people do make moral judgments.”

For the past couple of decades, moral psychology has been dominated by what’s known as moral-foundations theory, or M.F.T. According to M.F.T., people reach ethical decisions on the basis of mental structures, or “modules,” that evolution has wired into our brains. These modules—there are at least five of them—involve feelings like empathy for the vulnerable, resentment of cheaters, respect for authority, regard for sanctity, and anger at betrayal. The reason people often arrive at different judgments is that their modules have developed differently, either for individual or for cultural reasons. Liberals have come to rely almost exclusively on their fairness and empathy modules, allowing the others to atrophy. Conservatives, by contrast, tend to keep all their modules up and running.

If you find this theory implausible, you’re not alone. It has been criticized on a wide range of grounds, including that it is unsupported by neuroscience. Gray, for his part, wants to sweep aside moral-foundations theory, plural, and replace it with moral-foundation theory, singular. Our ethical judgments, he suggests, are governed not by a complex of modules but by one overriding emotion. Untold generations of cowering have written fear into our genes, rendering us hypersensitive to threats of harm.

“If you want to know what someone sees as wrong, your best bet is to figure out what they see as harmful,” Gray writes at one point. At another point: “All people share a harm-based moral mind.” At still another: “Harm is the master key of morality.”

If people all have the same ethical equipment, why are ethical questions so divisive? Gray’s answer is that different people fear differently. “Moral disagreements can still arise even if we all share a harm-based moral mind, because liberals and conservatives disagree about who is especially vulnerable to victimization,” he writes.


Here are some thoughts:

Notably, I am a big fan of Kurt Gray and his research. Search this site for multiple articles.

Our moral psychology is deeply rooted in our evolutionary past, particularly in our sensitivity to harm, which was crucial for survival. This legacy continues to influence modern moral and political debates, often leading to polarized views based on differing perceptions of harm. Kurt Gray’s argument that harm is the "master key" of morality simplifies the complex nature of moral judgments, offering a unifying framework while potentially overlooking the nuanced ways in which cultural and individual differences shape moral reasoning. His critique of moral-foundations theory (M.F.T.) challenges the idea that moral judgments are based on multiple innate modules, suggesting instead that a singular focus on harm underpins our moral (and sometimes ethical) decisions. This perspective highlights how moral disagreements, such as those over abortion or immigration, arise from differing assumptions about who is vulnerable to harm.

The idea that moral judgments are often intuitive rather than rational further complicates our understanding of moral decision-making. Gray’s examples, such as incestuous siblings or a vegetarian eating human flesh, illustrate how people instinctively perceive harm even when none is evident. This challenges the notion that moral reasoning is based on logical deliberation, emphasizing instead the role of emotion and intuition. Gray’s emphasis on harm-based storytelling as a tool for bridging moral divides underscores the power of narrative in shaping perceptions. However, it also raises concerns about the potential for manipulation, as seen in the use of exaggerated or false narratives in political rhetoric, such as Donald Trump’s fabricated tales of harm.

Ultimately, the article raises important questions about whether our evolved moral psychology is adequate for addressing the complex challenges of the modern world, such as climate change, nuclear weapons, and artificial intelligence. The mismatch between our ancient instincts and contemporary problems may be a significant source of societal tension. Gray’s work invites reflection on how we can better understand and address the roots of moral conflict, while cautioning against the potential pitfalls of relying too heavily on intuitive judgments and emotional narratives. It suggests that while storytelling can foster empathy and bridge divides, it must be used responsibly to avoid exacerbating polarization and misinformation.

Friday, February 14, 2025

High-reward, high-risk technologies? An ethical and legal account of AI development in healthcare

Corfmat, M., Martineau, J. T., & Régis, C. (2025).
BMC Med Ethics 26, 4
https://doi.org/10.1186/s12910-024-01158-1

Abstract

Background
Considering the disruptive potential of AI technology, its current and future impact in healthcare, as well as healthcare professionals’ lack of training in how to use it, the paper summarizes how to approach the challenges of AI from an ethical and legal perspective. It concludes with suggestions for improvements to help healthcare professionals better navigate the AI wave.

Methods
We analyzed the literature that specifically discusses ethics and law related to the development and implementation of AI in healthcare as well as relevant normative documents that pertain to both ethical and legal issues. After such analysis, we created categories regrouping the most frequently cited and discussed ethical and legal issues. We then proposed a breakdown within such categories that emphasizes the different - yet often interconnecting - ways in which ethics and law are approached for each category of issues. Finally, we identified several key ideas for healthcare professionals and organizations to better integrate ethics and law into their practices.

Results
We identified six categories of issues related to AI development and implementation in healthcare: (1) privacy; (2) individual autonomy; (3) bias; (4) responsibility and liability; (5) evaluation and oversight; and (6) work, professions and the job market. While each one raises different questions depending on perspective, we propose three main legal and ethical priorities: education and training of healthcare professionals, offering support and guidance throughout the use of AI systems, and integrating the necessary ethical and legal reflection at the heart of the AI tools themselves.

Conclusions
By highlighting the main ethical and legal issues involved in the development and implementation of AI technologies in healthcare, we illustrate their profound effects on professionals as well as their relationship with patients and other organizations in the healthcare sector. We must be able to identify AI technologies in medical practices and distinguish them by their nature so we can better react and respond to them. Healthcare professionals need to work closely with ethicists and lawyers involved in the healthcare system, or the development of reliable and trusted AI will be jeopardized.


Here are some thoughts:

This article explores the ethical and legal challenges surrounding artificial intelligence (AI) in healthcare. The authors identify six critical categories of issues: privacy, individual autonomy, bias, responsibility and liability, evaluation and oversight, as well as work and professional impacts.

The research highlights that AI is fundamentally different from previous medical technologies due to its disruptive potential and ability to perform autonomous learning and decision-making. While AI promises significant improvements in areas like biomedical research, precision medicine, and healthcare efficiency, there remains a significant gap between AI system development and practical implementation in healthcare settings.

The authors emphasize that healthcare professionals often lack comprehensive knowledge about AI technologies and their implications. They argue that understanding the nuanced differences between legal and ethical frameworks is crucial for responsible AI integration. Legal rules represent minimal mandatory requirements, while ethical considerations encourage deeper reflection on appropriate behaviors and choices.

The paper suggests three primary priorities for addressing AI's ethical and legal challenges: (1) educating and training healthcare professionals, (2) providing robust support and guidance during AI system use, and (3) integrating ethical and legal considerations directly into AI tool development. Ultimately, the researchers stress the importance of close collaboration between healthcare professionals, ethicists, and legal experts to develop reliable and trustworthy AI technologies.

Thursday, February 13, 2025

New Proposed Health Cybersecurity Rule: What Physicians Should Know

Alicia Ault
MedScape.com
Originally posted 10 Jan 25

A new federal rule could force hospitals and doctors’ groups to boost health cybersecurity measures to better protect patients’ health information and prevent ransomware attacks. Some of the proposed requirements could be expensive for healthcare providers.

The proposed rule, issued by the US Department of Health and Human Services (HHS) and published on January 6 in the Federal Register, marks the first time in a decade that the federal government has updated regulations governing the security of protected health information (PHI) that’s kept or shared online. Comments on the rule are due on March 6.

Because the risks for cyberattacks have increased exponentially, “there is a greater need to invest than ever before in both people and technologies to secure patient information,” Adam Greene, an attorney at Davis Wright Tremaine in Washington, DC, who advises healthcare clients on cybersecurity, told Medscape Medical News.

Bad actors continue to evolve and are often far ahead of their targets, added Mark Fox, privacy and research compliance officer for the American College of Cardiology.

In the proposed rule, HHS noted that breaches have risen by more than 50% since 2020. Damages from health data breaches are more expensive than in any other sector, averaging $10 million per incident, said HHS.


Here are some thoughts:

The article outlines a newly proposed cybersecurity rule aimed at strengthening the protection of healthcare data and systems. This rule is particularly relevant to physicians and healthcare organizations, as it addresses the growing threat of cyberattacks in the healthcare sector. The proposed regulation emphasizes the need for enhanced cybersecurity measures, such as implementing stronger protocols, conducting regular risk assessments, and ensuring compliance with updated standards. For physicians, this means adapting to new requirements that may require additional resources, training, and investment in cybersecurity infrastructure. The rule also highlights the critical importance of safeguarding patient information, as breaches can lead to severe consequences, including identity theft, financial loss, and compromised patient care. Beyond data protection, the rule aims to prevent disruptions to healthcare operations, such as delayed treatments or system shutdowns, which can arise from cyber incidents.

However, while the rule is a necessary step to address vulnerabilities, it may pose challenges for smaller practices or resource-limited healthcare organizations. Compliance could require significant financial and operational adjustments, potentially creating a burden for some providers. Despite these challenges, the proposed rule reflects a broader trend toward stricter cybersecurity regulations across industries, particularly in sectors like healthcare that handle highly sensitive information. It underscores the need for proactive measures to address evolving cyber threats and ensure the long-term security and reliability of healthcare systems. Collaboration between healthcare organizations, cybersecurity experts, and regulatory bodies will be essential to successfully implement these measures and share best practices. Ultimately, while the transition may be demanding, the long-term benefits—such as reduced risk of data breaches, enhanced patient trust, and uninterrupted healthcare services—are likely to outweigh the initial costs.

Wednesday, February 12, 2025

AI might start selling your choices before you make them, study warns

Monique Merrill
CourthouseNews.com
Originally posted 29 Dec 24

AI ethicists are cautioning that the rise of artificial intelligence may bring with it the commodification of even one's motivations.

Researchers from the University of Cambridge’s Leverhulme Center for the Future of Intelligence say — in a paper published Monday in the Harvard Data Science Review journal — the rise of generative AI, such as chatbots and virtual assistants, comes with the increasing opportunity for persuasive technologies to gain a strong foothold.

“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve”, Yaqub Chaudhary, a visiting scholar at the Center for Future of Intelligence, said in a statement.

When interacting even casually with AI chatbots — which can range from digital tutors to assistants to even romantic partners — users share intimate information that gives the technology access to personal "intentions" like psychological and behavioral data, the researcher said.

“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions,” Chaudhary added.

In fact, AI is already subtly manipulating and influencing motivations by mimicking the way a user talks or anticipating the way they are likely to respond, the authors argue.

Those conversations, as innocuous as they may seem, leave the door open for the technology to forecast and influence decisions before they are made.


Here are some thoughts:

Merrill discusses a study warning about the potential for artificial intelligence (AI) to predict and commodify human decisions before they are even made. The study raises significant ethical concerns about the extent to which AI can intrude into personal decision-making processes, potentially influencing or even selling predictions about our choices. AI systems are becoming increasingly capable of analyzing data patterns to forecast human behavior, which could lead to scenarios where companies use this technology to anticipate and manipulate consumer decisions before they are consciously made. This capability not only challenges the notion of free will but also opens the door to the exploitation of individuals' motivations and preferences for commercial gain.

AI ethicists are particularly concerned about the commodification of human motivations and decisions, which raises critical questions about privacy, autonomy, and the ethical use of AI in marketing and other industries. The ability of AI to predict and potentially manipulate decisions could lead to a future where individuals' choices are no longer entirely their own but are instead influenced or even predetermined by algorithms. This shift could undermine personal autonomy and create a society where decision-making is driven by corporate interests rather than individual agency.

The study underscores the urgent need for regulatory frameworks to ensure that AI technologies are used responsibly and that individuals' rights to privacy and autonomous decision-making are protected. It calls for proactive measures to address the potential misuse of AI in predicting and influencing human behavior, including the development of new laws or guidelines that limit how AI can be applied in marketing and other decision-influencing contexts. Overall, the study serves as a cautionary note about the rapid advancement of AI technologies and the importance of safeguarding ethical principles in their development and deployment. It highlights the risks of AI-driven decision commodification and emphasizes the need to prioritize individual autonomy and privacy in the digital age.

Tuesday, February 11, 2025

Facing death differently: revolutionising our approach to death and grief

Selman, L. (2024). 
BMJ, q2815.

Here is an excerpt:

End-of-life care hasn’t just been medicalised, it has been deprioritised. Healthcare systems and education focus on cures and life extension, sometimes at the expense of quality of life and compassionate care for dying people.

In the UK, about 90% of dying people would benefit from palliative care, but 25% don’t get it. Demand is set to rise 25% over the next 25 years as lifespans increase and health conditions grow more complex, yet the sector is already critically underfunded and overstretched. Just a third of UK hospice funding comes from the state, with the remaining £1bn raised annually through charity shops, fundraising events, and donations. This funding gap sends a clear message: care for dying people is less valued than aggressive treatments and high-tech medical advances. (It’s surely no coincidence that 9 in 10 of the clinical and care workforce in UK hospices are women, reflecting a long history of “women’s work” being undervalued.)

This patchwork funding model leaves rural and other underserved communities with glaring gaps in care, particularly for children. As demand for palliative care rises, the case for proper government funding for end-of-life care provision in care homes and the community, including hospices, grows ever more urgent.

In the meantime, stark inequities exist in access to hospice, palliative, and bereavement services. Marginalised communities face the greatest number of hurdles in accessing support at a time when compassion is most needed. Ethnic minority groups, in particular, encounter language barriers, inadequate outreach, and a shortage of culturally competent providers. Thirty per cent of people from ethnic minority groups but just 17% of white people say they don’t trust healthcare professionals to provide high-quality end-of-life care.


Here are some thoughts:

Selman highlights the significant challenges and ethical concerns surrounding end-of-life care in the UK. Although about 90% of dying people would benefit from palliative care, 25% do not receive it, and demand is expected to rise by 25% over the next 25 years as lifespans increase and health conditions grow more complex. Yet the sector remains critically underfunded, with only a third of hospice funding coming from the state and the rest relying on charitable efforts. This funding gap reflects a societal undervaluation of end-of-life care compared with high-tech medical interventions, raising ethical questions about priorities and the equitable distribution of resources.

The article also sheds light on stark inequities in access to palliative and bereavement services, particularly for marginalized communities. Ethnic minority groups face additional barriers, such as language difficulties, inadequate outreach, and a lack of culturally competent care providers. Distrust of healthcare professionals to provide high-quality end-of-life care is markedly higher among ethnic minority groups (30%) than among white people (17%), highlighting systemic failures in addressing the needs of diverse populations. These disparities underscore the ethical imperative to ensure equitable access to compassionate, culturally sensitive care for all.

Ultimately, the piece calls for a reevaluation of societal and healthcare priorities, emphasizing the need to balance life extension with quality of life and dignity in dying. It advocates for increased government funding, culturally competent care, and a shift in values to prioritize compassion and equity in end-of-life care. These issues are not only practical but deeply ethical, reflecting broader questions about how societies value and care for their most vulnerable members.

Monday, February 10, 2025

Consent and Compensation: Resolving Generative AI’s Copyright Crisis

Pasquale, F., & Sun, H. (2024).
SSRN Electronic Journal.

Abstract

Generative artificial intelligence (AI) has the potential to augment and democratize creativity. However, it is undermining the knowledge ecosystem that now sustains it. Generative AI may unfairly compete with creatives, displacing them in the market. Most AI firms are not compensating creative workers for composing the songs, drawing the images, and writing both the fiction and non-fiction books that their models need in order to function. AI thus threatens not only to undermine the livelihoods of authors, artists, and other creatives, but also to destabilize the very knowledge ecosystem it relies on.

Alarmed by these developments, many copyright owners have objected to the use of their works by AI providers. To recognize and empower their demands to stop non-consensual use of their works, we propose a streamlined opt-out mechanism that would require AI providers to remove objectors’ works from their databases once copyright infringement has been documented. Those who do not object still deserve compensation for the use of their work by AI providers. We thus also propose a levy on AI providers, to be distributed to the copyright owners whose work they use without a license. This scheme is designed to ensure creatives receive a fair share of the economic bounty arising out of their contributions to AI. Together these mechanisms of consent and compensation would result in a new grand bargain between copyright owners and AI firms, designed to ensure both thrive in the long-term.

Here are some thoughts:

This essay discusses the copyright challenges presented by generative artificial intelligence (AI). It argues that AI's ability to create content and replicate existing works threatens the livelihoods of authors and other creatives, destabilizing the knowledge ecosystem that AI relies on. The authors propose a legislative solution involving an opt-out mechanism that would allow copyright owners to remove their works from AI training databases and a levy on AI providers to compensate copyright owners whose work is used without a license.
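To make the levy mechanism concrete, here is a purely illustrative sketch of how a levy pool might be split among copyright owners in proportion to how many of their works an AI provider used without a license. Pasquale and Sun do not specify a distribution formula; the pro-rata rule, the levy amount, and all names below are assumptions for illustration only.

```python
# Illustrative sketch only: pro-rata distribution of a hypothetical levy pool.
# The formula is an assumption, not the authors' actual proposal.
from collections import Counter

def distribute_levy(levy_pool: float, works_used: list[str]) -> dict[str, float]:
    """Split levy_pool across rights holders in proportion to usage counts."""
    counts = Counter(works_used)  # rights holder -> number of works used
    total = sum(counts.values())
    return {owner: levy_pool * n / total for owner, n in counts.items()}

# Hypothetical example: a provider trained on works by three rights holders.
usage_log = ["author_a", "author_a", "author_b", "publisher_c", "author_a"]
print(distribute_levy(100_000.0, usage_log))
# {'author_a': 60000.0, 'author_b': 20000.0, 'publisher_c': 20000.0}
```

Any real scheme would of course need to weigh works by more than raw counts (length, market value, degree of reliance), which is precisely the kind of design question the authors leave to legislation.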

The essay emphasizes the urgency of addressing the issue, asserting that the free use of copyrighted works by AI providers devalues human creativity and could undermine AI's future development by removing incentives for creating the training data it needs. It highlights the disruption of the knowledge ecosystem caused by the opacity and scale of AI systems, which erodes authors' control over their works. The authors point out that AI firms are unlikely to offer compensation for the use of copyrighted works.

Ultimately, the essay advocates for a new agreement between copyright owners and AI firms, facilitated by the proposed mechanisms of consent and compensation. This would ensure the long-term viability of both AI and the human creative input it depends on. The authors believe that their proposed framework offers a promising legislative solution to the copyright problems created by new technological uses of works.

Sunday, February 9, 2025

Does Morality Do Us Any Good?

Nikhil Krishnan
The New Yorker
Originally published 23 Dec 24

Here is an excerpt:

As things became more unequal, we developed a paradoxical aversion to inequality. In time, patterns began to appear that are still with us. Kinship and hierarchy were replaced or augmented by coöperative relationships that individuals entered into voluntarily—covenants, promises, and the economically essential contracts. The people of Europe, at any rate, became what Joseph Henrich, the Harvard evolutionary biologist and anthropologist, influentially termed “WEIRD”: Western, educated, industrialized, rich, and democratic. WEIRD people tend to believe in moral rules that apply to every human being, and tend to downplay the moral significance of their social communities or personal relations. They are, moreover, much less inclined to conform to social norms that lack a moral valence, or to defer to such social judgments as shame and honor, but much more inclined to be bothered by their own guilty consciences.

That brings us to the past fifty years, decades that inherited the familiar structures of modernity: capitalism, liberal democracy, and the critics of these institutions, who often fault them for failing to deliver on the ideal of human equality. The civil-rights struggles of these decades have had an urgency and an excitement that, Sauer writes, make their supporters think victory will be both quick and lasting. When it is neither, disappointment produces the “identity politics” that is supposed to be the essence of the present cultural moment.

His final chapter, billed as an account of the past five years, connects disparate contemporary phenomena—vigilance about microaggressions and cultural appropriation, policies of no-platforming—as instances of the “punitive psychology” of our early hominin ancestors. Our new sensitivities, along with the twenty-first-century terms they’ve inspired (“mansplaining,” “gaslighting”), guide us as we begin to “scrutinize the symbolic markers of our group membership more and more closely and to penalize any non-compliance.” We may have new targets, Sauer says, but the psychology is an old one.


Here are some thoughts:

Understanding the origins of human morality is relevant for practicing psychologists because it provides insight into the psychological foundations of moral behavior and professional social interactions. These insights bear on work with patients as well as on our own ethical code. The article explores how our moral intuitions have evolved over millions of years, revealing that our current moral frameworks are not fixed absolutes but dynamic systems shaped by biological and social processes. Other scholars, such as Haidt, de Waal, and Tomasello, have conceptualized morality in similar ways.

Hanno Sauer's work illuminates a similar journey of moral development, tracing how early human survival strategies of cooperation and altruism gradually transformed into complex ethical systems. Psychologists can gain insights from this evolutionary perspective, understanding that our moral convictions are deeply rooted in our species' adaptive mechanisms rather than being purely rational constructs.

The article highlights several key insights:
  • Moral beliefs are significantly influenced by social context and evolutionary history
  • Our moral intuitions often precede rational justification
  • Cooperation and punishment played crucial roles in shaping human moral psychology
  • Universal moral values exist across different cultures, despite apparent differences

Particularly compelling is the exploration of how our "punitive psychology" emerged as a mechanism for social regulation, demonstrating how psychological processes have been instrumental in creating societal norms. For practicing psychologists, this perspective supports a more nuanced approach to patient behaviors, moral reasoning, and the complex interplay between individual experiences and broader evolutionary patterns. Notably, morality is always contextual, as I have pointed out in other summaries.

Finally, the article offers an optimistic perspective on moral progress, suggesting that our fundamental values are more aligned than we might initially perceive. This insight can be helpful for psychologists working with individuals from diverse backgrounds, emphasizing our shared psychological and evolutionary heritage.

Saturday, February 8, 2025

AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking

Gerlich, M. (2025).
Societies, 15(1), 6.

Abstract

The proliferation of artificial intelligence (AI) tools has transformed numerous aspects of daily life, yet its impact on critical thinking remains underexplored. This study investigates the relationship between AI tool usage and critical thinking skills, focusing on cognitive offloading as a mediating factor. Utilising a mixed-method approach, we conducted surveys and in-depth interviews with 666 participants across diverse age groups and educational backgrounds. Quantitative data were analysed using ANOVA and correlation analysis, while qualitative insights were obtained through thematic analysis of interview transcripts. The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. Furthermore, higher educational attainment was associated with better critical thinking skills, regardless of AI usage. These results highlight the potential cognitive costs of AI tool reliance, emphasising the need for educational strategies that promote critical engagement with AI technologies. This study contributes to the growing discourse on AI’s cognitive implications, offering practical recommendations for mitigating its adverse effects on critical thinking. The findings underscore the importance of fostering critical thinking in an AI-driven world, making this research essential reading for educators, policymakers, and technologists.

Here are some thoughts:

"De-skilling" is a concern regarding LLMs. Gerlich explores the critical relationship between AI tool usage and critical thinking skills. The study investigates how artificial intelligence technologies impact cognitive processes, with a specific focus on cognitive offloading as a mediating factor.

Gerlich conducted a comprehensive mixed-methods study involving 666 participants from diverse age groups and educational backgrounds. The study employed surveys and in-depth interviews, analyzing the quantitative data with ANOVA and correlation analysis and the interview transcripts with thematic analysis. Key findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, a relationship mediated by cognitive offloading and particularly pronounced among younger participants.
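The paper does not publish its analysis code, but the quantitative steps it names are standard. Below is a minimal sketch, on simulated stand-in data, of what that pipeline might look like: a zero-order correlation between AI use and critical-thinking scores, a one-way ANOVA across age groups, and a simple Baron-Kenny style mediation through cognitive offloading. Variable names, effect sizes, and the data-generating assumptions are all illustrative, not Gerlich's.

```python
# Minimal sketch (not the author's code) of a correlation / ANOVA / mediation
# workflow like the one described in the study. Data are simulated stand-ins.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 666  # sample size reported in the paper; values below are fabricated
ai_usage = rng.normal(0, 1, n)
offloading = 0.6 * ai_usage + rng.normal(0, 1, n)             # assumed mediator path
critical_thinking = -0.5 * offloading + rng.normal(0, 1, n)   # assumed outcome path
age_group = rng.choice(["17-25", "26-45", "46+"], size=n)

df = pd.DataFrame({"ai_usage": ai_usage,
                   "offloading": offloading,
                   "critical_thinking": critical_thinking,
                   "age_group": age_group})

# 1. Zero-order correlation: is heavier AI use associated with lower scores?
r, p = stats.pearsonr(df["ai_usage"], df["critical_thinking"])
print(f"r = {r:.2f}, p = {p:.3g}")

# 2. One-way ANOVA: do critical-thinking scores differ across age groups?
groups = [g["critical_thinking"].values for _, g in df.groupby("age_group")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_anova:.3g}")

# 3. Mediation via cognitive offloading (Baron-Kenny steps):
a = smf.ols("offloading ~ ai_usage", data=df).fit().params["ai_usage"]
model_b = smf.ols("critical_thinking ~ ai_usage + offloading", data=df).fit()
b = model_b.params["offloading"]
direct = model_b.params["ai_usage"]
print(f"indirect effect a*b = {a * b:.3f}, direct effect = {direct:.3f}")
```

A pattern like a sizeable negative indirect effect alongside a small direct effect is what "mediated by cognitive offloading" means operationally: AI use predicts offloading, and offloading in turn predicts lower critical-thinking scores.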

The research highlights several important insights. Younger participants demonstrated higher dependence on AI tools and correspondingly lower critical thinking scores compared to older participants. Conversely, individuals with higher educational attainment maintained better critical thinking skills regardless of their AI tool usage. These findings underscore the potential cognitive costs associated with excessive reliance on AI technologies.

The study's broader implications are significant for education and policy. It emphasizes the need for educational strategies that promote critical engagement with AI technologies, warning against the risk of cognitive offloading, in which individuals delegate cognitive tasks to external tools and thereby reduce their capacity for deep, reflective thinking. By exploring how AI tools influence cognitive processes, the research contributes to the growing discourse on technology's impact on human cognitive development.

Gerlich's work is particularly significant as it offers practical recommendations for mitigating adverse effects on critical thinking in an increasingly AI-driven world. The research serves as essential reading for educators, policymakers, and technologists seeking to understand and address the complex relationship between artificial intelligence and human cognitive skills.

Friday, February 7, 2025

Physicians’ ethical concerns about artificial intelligence in medicine: a qualitative study: “The final decision should rest with a human”

Kahraman, F., et al. (2024).
Frontiers in Public Health, 12.

Abstract

Background/aim: Artificial Intelligence (AI) is the capability of computational systems to perform tasks that require human-like cognitive functions, such as reasoning, learning, and decision-making. Unlike human intelligence, AI does not involve sentience or consciousness but focuses on data processing, pattern recognition, and prediction through algorithms and learned experiences. In healthcare including neuroscience, AI is valuable for improving prevention, diagnosis, prognosis, and surveillance.

Methods: This qualitative study aimed to investigate the acceptability of AI in Medicine (AIIM) and to elucidate any technical and scientific, as well as social and ethical issues involved. Twenty-five doctors from various specialties were carefully interviewed regarding their views, experience, knowledge, and attitude toward AI in healthcare.

Results: Content analysis confirmed the key ethical principles involved: confidentiality, beneficence, and non-maleficence. Honesty was the least invoked principle. A thematic analysis established four salient topic areas, i.e., advantages, risks, restrictions, and precautions. Alongside the advantages, there were many limitations and risks. The study revealed a perceived need for precautions to be embedded in healthcare policies to counter the risks discussed. These precautions need to be multi-dimensional.

Conclusion: The authors conclude that AI should be rationally guided, function transparently, and produce impartial results. It should assist human healthcare professionals collaboratively. This kind of AI will permit fairer, more innovative healthcare which benefits patients and society whilst preserving human dignity. It can foster accuracy and precision in medical practice and reduce the workload by assisting physicians during clinical tasks. AIIM that functions transparently and respects the public interest can be an inspiring scientific innovation for humanity.

Here are some thoughts:

The integration of Artificial Intelligence (AI) in healthcare presents a complex landscape of potential benefits and significant ethical concerns. On one hand, AI offers advantages such as error reduction, increased diagnostic speed, and the potential to alleviate the workload of healthcare professionals, allowing them more time for complex cases and patient interaction. These advancements could lead to improved patient outcomes and more efficient healthcare delivery.

However, ethical issues loom large. Privacy is a paramount concern, as the sensitive nature of patient data necessitates robust security measures to prevent misuse. The question of responsibility in AI-driven decision-making is also fraught with ambiguity, raising legal and ethical dilemmas about accountability in case of errors.

There is a legitimate fear of unemployment among healthcare professionals, though the more likely trajectory is AI augmenting rather than replacing human capabilities. The human touch in medicine, encompassing empathy and trust-building, is irreplaceable and must be preserved.

Education and regulation are crucial for the ethical integration of AI. Healthcare professionals and patients need to understand AI's role and limitations, with clear rules to ensure ethical use. Bias in AI algorithms, potentially exacerbating health disparities, must be addressed through diverse development teams and continuous monitoring.

Transparency is essential for trust, with patients informed about AI's role in their care and doctors capable of explaining AI decisions. Legal implications, such as data ownership and patient consent, require policy attention.

Economically, AI could enhance healthcare efficiency, but its impact on costs and accessibility needs careful consideration. International collaboration is vital for uniform standards and fairness globally.