Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Friday, January 31, 2025

Creating ‘Mirror Life’ Could Be Disastrous, Scientists Warn

Simon Makin
Scientific American
Originally posted 14 DEC 24

A category of synthetic organisms dubbed “mirror life,” whose component molecules are mirror images of their natural counterparts, could pose unprecedented risks to human life and ecosystems, according to a perspective article by leading experts, including Nobel Prize winners. The article, published in Science on December 12, is accompanied by a lengthy report detailing their concerns.

Mirror life has to do with a ubiquitous phenomenon in the natural world: many molecules and other objects cannot be superimposed on their mirror images. For example, your left hand can’t simply be turned over to match your right hand. This handedness is encountered throughout the natural world.

Groups of molecules of the same type tend to have the same handedness. The nucleotides that make up DNA are nearly always right-handed, for instance, while proteins are composed of left-handed amino acids.

Handedness, more formally known as chirality, is hugely important in biology because interactions between biomolecules rely on them having the expected form. For example, if a protein’s handedness is reversed, it cannot interact with partner molecules, such as receptors on cells. “Think of it like hands in gloves,” says Katarzyna Adamala, a synthetic biologist at the University of Minnesota and a co-author of the article and the accompanying technical report, which is almost 300 pages long. “My left glove won’t fit my right hand.”


Here are some thoughts:

Oh great, another existential risk.

Scientists are sounding the alarm about the potential risks of creating "mirror life," synthetic biological systems with mirrored molecular structures. Researchers have long explored mirror life's possibilities in medicine, biotechnology and other fields. However, experts now warn that unleashing these synthetic organisms could have disastrous consequences.

Mirror life forms may interact unpredictably with natural organisms, disrupting ecosystems and causing irreparable damage. Furthermore, synthetic systems could inadvertently amplify harmful pathogens or toxins, posing significant threats to human health. Another concern is uncontrolled evolution, where mirror life could mutate and spread uncontrollably. Additionally, synthetic organisms may resist decomposition, persisting in environments and potentially causing long-term harm.

To mitigate these risks, scientists advocate a precautionary approach, emphasizing cautious research and regulation. Thorough risk assessments must be conducted before releasing mirror life into the environment. Researchers also stress the need for containment strategies to prevent unintended spread. By taking a cautious stance, scientists hope to prevent potential catastrophes.

Mirror life research aims to revolutionize various fields, including medicine and biotechnology. However, experts urge careful consideration to avoid unforeseen consequences. As science continues to advance, addressing these concerns will be crucial in ensuring responsible development and minimizing risks associated with mirror life.

Thursday, January 30, 2025

Advancements in AI-driven Healthcare: A Comprehensive Review of Diagnostics, Treatment, and Patient Care Integration

Kasula, B. Y. (2024, January 18).
International Journal of Machine Learning for Sustainable Development, 6(1).

Abstract

This research paper presents a comprehensive review of the recent advancements in AI-driven healthcare, focusing on diagnostics, treatment, and the integration of AI technologies in patient care. The study explores the evolution of artificial intelligence applications in medical imaging, diagnosis accuracy, personalized treatment plans, and the overall enhancement of healthcare delivery. Ethical considerations and challenges associated with AI adoption in healthcare are also discussed. The paper concludes with insights into the potential future developments and the transformative impact of AI on the healthcare landscape.


Here are some thoughts:

This research paper provides a comprehensive review of recent advancements in AI-driven healthcare, focusing on diagnostics, treatment, and the integration of AI technologies in patient care. The study explores the evolution of artificial intelligence applications in medical imaging, diagnosis accuracy, personalized treatment plans, and the overall enhancement of healthcare delivery. It discusses the transformative impact of AI on healthcare, highlighting key achievements, challenges, and ethical considerations associated with its widespread adoption.

The paper examines AI's role in improving diagnostic accuracy, particularly in medical imaging, and its contribution to developing personalized treatment plans. It also addresses the ethical dimensions of AI in healthcare, including patient privacy, data security, and equitable distribution of AI-driven healthcare benefits. The research emphasizes the need for a holistic approach to AI integration in healthcare, calling for collaboration between healthcare professionals, technologists, and policymakers to navigate the evolving landscape successfully.

It is important for psychologists to understand the content of this article for several reasons. Firstly, AI is increasingly being applied in mental health diagnosis and treatment, as mentioned in the paper's references. Psychologists need to be aware of these advancements to stay current in their field and potentially incorporate AI-driven tools into their practice. Secondly, the ethical considerations discussed in the paper, such as patient privacy and data security, are equally relevant to psychological practice. Understanding these issues can help psychologists navigate the ethical challenges that may arise with the integration of AI in mental health care.

Moreover, the paper's emphasis on personalized medicine and treatment plans is particularly relevant to psychology, where individualized approaches are often crucial. By understanding AI's potential in this area, psychologists can explore ways to enhance their treatment strategies and improve patient outcomes. Lastly, as healthcare becomes increasingly interdisciplinary, psychologists need to be aware of technological advancements in other medical fields to collaborate effectively with other healthcare professionals and provide comprehensive care to their patients.

Wednesday, January 29, 2025

AI has an environmental problem.

Here’s what the world can do about that.

UN Environment Programme
Originally posted 21 Sept 24

There are high hopes that artificial intelligence (AI) can help tackle some of the world’s biggest environmental emergencies. Among other things, the technology is already being used to map the destructive dredging of sand and chart emissions of methane, a potent greenhouse gas.  

But when it comes to the environment, there is a negative side to the explosion of AI and its associated infrastructure, according to a growing body of research. The proliferating data centres that house AI servers produce electronic waste. They are large consumers of water, which is becoming scarce in many places. They rely on critical minerals and rare elements, which are often mined unsustainably. And they use massive amounts of electricity, spurring the emission of planet-warming greenhouse gases.  

“There is still much we don’t know about the environmental impact of AI but some of the data we do have is concerning,” said Golestan (Sally) Radwan, the Chief Digital Officer of the United Nations Environment Programme (UNEP). “We need to make sure the net effect of AI on the planet is positive before we deploy the technology at scale.”  

This week, UNEP released an issue note that explores AI’s environmental footprint and considers how the technology can be rolled out sustainably. It follows a major UNEP report, Navigating New Horizons, which also examined AI’s promise and perils. Here’s what those publications found.


Here are some thoughts:

The article discusses the significant environmental impact of artificial intelligence (AI) technologies and proposes solutions to mitigate these effects. AI systems, particularly those requiring substantial computational power, consume vast amounts of energy, often sourced from non-renewable resources, contributing to carbon emissions. Data centers, which host AI operations, also demand considerable energy and water for cooling. Moreover, the production of AI hardware, such as GPUs and servers, involves the extraction of rare earth metals, leading to environmental damage, and the disposal of this hardware contributes to electronic waste.
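
As a rough, back-of-the-envelope illustration of the electricity point (every figure below is a hypothetical placeholder, not a measured value for any real model or data center), the energy use and emissions of a training run can be estimated from accelerator count, power draw, runtime, data-center overhead, and grid carbon intensity:

```python
# Back-of-the-envelope estimate of a training run's electricity use and CO2
# emissions. All numbers are hypothetical placeholders for illustration only.
num_gpus = 512            # accelerators used
gpu_power_kw = 0.7        # average draw per accelerator, in kilowatts
hours = 24 * 30           # one month of training
pue = 1.2                 # data-center overhead (power usage effectiveness)
grid_intensity = 0.4      # kg CO2 per kWh for the local grid

energy_kwh = num_gpus * gpu_power_kw * hours * pue
co2_tonnes = energy_kwh * grid_intensity / 1000

print(f"Electricity: {energy_kwh:,.0f} kWh")
print(f"Emissions:   {co2_tonnes:,.1f} t CO2")
```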

The article likely suggests several strategies to address these issues, including the development of energy-efficient AI algorithms and hardware, the use of renewable energy sources to power data centers, and the implementation of sustainable practices in hardware production and disposal. It may also advocate for policies that regulate the environmental impact of AI technologies.

Stakeholders, including governments, corporations, and researchers, are probably emphasized as crucial players in creating sustainable AI ecosystems. The importance of public awareness and consumer pressure in driving the industry towards greener practices is likely highlighted as well.

From an ethical standpoint, the article underscores the responsibility of AI developers and companies to minimize environmental harm, balancing technological progress with ecological sustainability. It raises concerns about intergenerational equity, urging sustainable practices to protect the planet for future generations. Corporate accountability is another key ethical consideration, emphasizing the need for tech companies to prioritize environmental sustainability. The role of policy and governance is also stressed, with a call for regulatory frameworks to ensure ethical AI development. Lastly, the article likely emphasizes the moral duty of consumers to demand and be informed about greener AI technologies.

Tuesday, January 28, 2025

A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable

Earp, B. D., et al.
The American Journal of Bioethics, 24(7),
13–26.

Abstract

When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such a PPP were more accurate, on average, than human surrogates in identifying patient preferences, the proposed algorithm would nevertheless fail to respect the patient’s (former) autonomy since it draws on the ‘wrong’ kind of data: namely, data that are not specific to the individual patient and which therefore may not reflect their actual values, or their reasons for having the preferences they do. Taking such criticisms on board, we here propose a new approach: the Personalized Patient Preference Predictor (P4). The P4 is based on recent advances in machine learning, which allow technologies including large language models to be more cheaply and efficiently ‘fine-tuned’ on person-specific data. The P4, unlike the PPP, would be able to infer an individual patient’s preferences from material (e.g., prior treatment decisions) that is in fact specific to them. Thus, we argue, in addition to being potentially more accurate at the individual level than the previously proposed PPP, the predictions of a P4 would also more directly reflect each patient’s own reasons and values. In this article, we review recent discoveries in artificial intelligence research that suggest a P4 is technically feasible, and argue that, if it is developed and appropriately deployed, it should assuage some of the main autonomy-based concerns of critics of the original PPP. We then consider various objections to our proposal and offer some tentative replies.


Here are some thoughts:

This article introduces the concept of a Personalized Patient Preference Predictor (P4), an advanced version of the previously proposed Patient Preference Predictor (PPP). The P4 is designed to address the challenges of making substituted judgments for incapacitated patients in healthcare settings. Unlike the PPP, which relies on population-level data to predict patient preferences, the P4 utilizes machine learning and large language models to analyze person-specific data, such as prior treatment decisions and digital footprints, to more accurately infer an individual patient's preferences.

The authors argue that the P4 is both technically feasible and ethically desirable, as it addresses some of the main criticisms of the original PPP. By using individual-specific data, the P4 aims to better reflect each patient's own reasons and values, potentially improving the accuracy of substituted judgments while respecting patient autonomy. The article discusses the technical aspects of implementing a P4, including the use of advanced AI technologies, and considers various ethical objections and potential responses.
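
To make the fine-tuning idea concrete, here is a minimal sketch of the general technique the authors describe: adapting a small, off-the-shelf language model to person-specific material so it can later be prompted about a patient's likely preferences. The model name, data fields, and training settings are illustrative assumptions, not the authors' implementation, and any clinical use would require far more data, validation, and oversight.

```python
# Minimal sketch (not the authors' implementation): fine-tuning a small causal
# language model on hypothetical person-specific records.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Hypothetical person-specific material: prior decisions, stated values, etc.
patient_records = [
    "2019: declined intubation during COPD exacerbation; prioritized comfort.",
    "2021: accepted short-term dialysis with the stated goal of attending a wedding.",
    "Advance-care note: values independence over prolonged life support.",
]

model_name = "distilgpt2"  # small stand-in model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=64)
    out["labels"] = [ids[:] for ids in out["input_ids"]]  # causal LM targets
    return out

dataset = Dataset.from_dict({"text": patient_records}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="p4_sketch", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to="none"),
    train_dataset=dataset,
)
trainer.train()  # afterwards the model can be prompted about this patient's preferences
```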

It is important for psychologists to understand the content of this article for several reasons. First, the P4 represents a significant advancement in the field of medical decision-making for incapacitated patients, which has implications for patient care, autonomy, and mental health. Psychologists working in healthcare settings may encounter situations where such tools could be valuable in guiding treatment decisions. Second, the ethical considerations surrounding the use of AI and machine learning in healthcare decision-making are crucial for psychologists to grasp, as they may be called upon to contribute to discussions about the implementation and use of such technologies. Finally, understanding the potential of personalized predictive models like the P4 could inform psychological research and practice, particularly in areas related to decision-making, patient preferences, and the intersection of technology and mental health care.

Monday, January 27, 2025

Beyond rating scales: With targeted evaluation, large language models are poised for psychological assessment

Kjell, O. N., Kjell, K., & Schwartz, H. A. (2023).
Psychiatry Research, 333, 115667.

Abstract

In this narrative review, we survey recent empirical evaluations of AI-based language assessments and present a case for the technology of large language models to be poised for changing standardized psychological assessment. Artificial intelligence has been undergoing a purported “paradigm shift” initiated by new machine learning models, large language models (e.g., BERT, LAMMA, and that behind ChatGPT). These models have led to unprecedented accuracy over most computerized language processing tasks, from web searches to automatic machine translation and question answering, while their dialogue-based forms, like ChatGPT have captured the interest of over a million users. The success of the large language model is mostly attributed to its capability to numerically represent words in their context, long a weakness of previous attempts to automate psychological assessment from language. While potential applications for automated therapy are beginning to be studied on the heels of chatGPT's success, here we present evidence that suggests, with thorough validation of targeted deployment scenarios, that AI's newest technology can move mental health assessment away from rating scales and to instead use how people naturally communicate, in language.

Highlights

• Artificial intelligence has been undergoing a purported “paradigm shift” initiated by new machine learning models, large language models.

• We review recent empirical evaluations of AI-based language assessments and present a case for the technology of large language models, that are used for chatGPT and BERT, to be poised for changing standardized psychological assessment.

• While potential applications for automated therapy are beginning to be studied on the heels of chatGPT's success, here we present evidence that suggests, with thorough validation of targeted deployment scenarios, that AI's newest technology can move mental health assessment away from rating scales and to instead use how people naturally communicate, in language.

Here are some thoughts:

The article underscores the transformative role of machine learning (ML) and artificial intelligence (AI) in psychological assessment, marking a significant shift in how psychologists approach their work. By integrating these technologies, assessments can become more accurate, efficient, and scalable, enabling psychologists to analyze vast amounts of data and uncover patterns that might otherwise go unnoticed. This is particularly important in improving diagnostic accuracy, as AI can help mitigate human bias and subjectivity, providing data-driven insights that complement clinical judgment. However, the adoption of these tools also raises critical ethical and practical considerations, such as ensuring client privacy, data security, and the responsible use of AI in alignment with professional standards.

As AI becomes more prevalent, the role of psychologists is evolving, requiring them to collaborate with these technologies by focusing on interpretation, contextual understanding, and therapeutic decision-making, while maintaining their unique human expertise.

Looking ahead, the article highlights emerging trends like natural language processing (NLP) for analyzing speech and text, as well as wearable devices for real-time behavioral and physiological data collection, offering psychologists innovative methods to enhance their practice. These advancements not only improve the precision of assessments but also pave the way for more personalized and timely interventions, ultimately supporting better mental health outcomes for clients.
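
As a toy illustration of what "numerically representing words in their context" can look like in practice, the sketch below embeds a handful of hypothetical free-text responses with a small pretrained sentence encoder and relates the embeddings to made-up questionnaire totals. It is not the authors' pipeline; real language-based assessment requires much larger samples and careful validation.

```python
# Minimal sketch (hypothetical data): representing free-text responses
# numerically with a pretrained sentence encoder, then mapping those
# embeddings to a conventional rating-scale score.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

texts = [
    "I feel hopeful and connected to the people around me.",
    "Most days I feel exhausted and can't concentrate on anything.",
    "Things are fine, though I worry more than I used to.",
    "I can't sleep and nothing seems worth doing anymore.",
]
questionnaire_totals = [2, 14, 7, 19]  # hypothetical scale scores for illustration

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small contextual encoder
embeddings = encoder.encode(texts)                 # one vector per response

# With a real dataset (hundreds of participants), cross-validated accuracy of
# language-based scores can be compared against the rating scale itself.
scores = cross_val_score(Ridge(alpha=1.0), embeddings, questionnaire_totals,
                         cv=2, scoring="r2")
print("cross-validated R^2:", scores)
```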

Sunday, January 26, 2025

FDA Approves Spravato Nasal Spray for Treatment-Resistant Depression

Physicians Weekly
Originally published 23 Jan 25

The U.S. Food and Drug Administration has approved Spravato (esketamine) CIII nasal spray for adults living with major depressive disorder who have had an inadequate response to at least two oral antidepressants, according to a news release issued by Johnson & Johnson.

Spravato is the first and only approved monotherapy for adults with refractory major depressive disorder. Approval of Spravato, granted following FDA priority review, was based on the results of a randomized, double-blind, multicenter, placebo-controlled trial. On day 28 of the trial, patients taking Spravato exhibited numerical improvements for all 10 items on the Montgomery-Asberg Depression Rating Scale (MADRS). After four weeks, 22.5 percent of patients taking Spravato achieved remission (score ≤12 on MADRS) compared with 7.6 percent of patients taking placebo.

Spravato nasal spray is administered by the patient under the supervision of a health care provider in a health care setting. Spravato targets the neurotransmitter glutamate; however, the mechanism by which esketamine exerts its antidepressant effect is unknown. In an effort to ensure the safe and appropriate use of Spravato, the medication is only available through a restricted program called the Spravato Risk Evaluation and Mitigation Strategy Program. This is due to the risks for serious adverse outcomes resulting from sedation, dissociation, respiratory depression, abuse, and misuse.


Here are the basics: 

Spravato® (esketamine) nasal spray is an innovative FDA-approved medication designed to treat adults with treatment-resistant depression (TRD) who have not responded to at least two other antidepressant therapies, as well as those with major depressive disorder (MDD) experiencing suicidal thoughts or actions. Unlike traditional antidepressants, which primarily target serotonin or norepinephrine, Spravato works as a non-competitive N-methyl-D-aspartate (NMDA) receptor antagonist, modulating the glutamate system in the brain to provide rapid relief. This unique mechanism allows some patients to experience improvement in depressive symptoms within hours or days, making it a valuable option for those in urgent need of relief.

Administered as a nasal spray, Spravato is taken under the supervision of a healthcare provider in certified treatment centers to ensure safety and proper monitoring. The treatment regimen typically begins with twice-weekly doses during the first four weeks, followed by a maintenance phase with less frequent dosing. Due to potential side effects such as dissociation, dizziness, sedation, and increased blood pressure, patients are monitored for at least two hours after each administration. Spravato is not recommended for individuals with certain medical conditions, including aneurysmal vascular disease or uncontrolled hypertension.

To ensure safe use, Spravato is available only through a restricted distribution program called the Spravato REMS (Risk Evaluation and Mitigation Strategy). This program helps healthcare providers and patients navigate the treatment process while minimizing risks. By offering a rapid-acting alternative for severe depression, Spravato represents a significant advancement in mental health care, providing hope for patients who have not found success with conventional therapies.

Saturday, January 25, 2025

Mental health apps need a complete redesign

Benjamin Kaveladze
Statnews.com
Originally posted 9 Dec 2024

The internet has transformed the ways we access mental health support. Today, anyone with a computer or smartphone can use digital mental health interventions (DMHIs) like Calm for insomnia, PTSD Coach for post-traumatic stress, and Sesame Street’s Breathe, Think, Do with Sesame for anxious kids. Given that most people facing mental illness don’t access professional help through traditional sources like therapists or psychiatrists, DMHIs’ promise to provide effective and trustworthy support globally and equitably is a big deal.

But before consumer DMHIs can transform access to effective support, they must overcome an urgent problem: Most people don’t want to use them. Our best estimate is that 96% of people who download a mental health app will have entirely stopped using it just 15 days later. The field of digital mental health has been trying to tackle this profound engagement problem for years, with little progress. As a result, the wave of pandemic-era excitement and funding for digital mental health is drying up. To advance DMHIs toward their promise of global impact, we need a revolution in these tools’ design.


Here are some thoughts:

This article highlights the critical engagement challenges faced by digital mental health interventions (DMHIs), with 96% of users discontinuing app use within 15 days. This striking statistic points to a need for a fundamental redesign of mental health apps, which currently rely heavily on outdated and conventional approaches reminiscent of 1990s self-help handbooks. The author argues that DMHIs suffer from a lack of creative innovation, as developers have been constrained by traditional therapeutic frameworks, failing to explore the broader potential of technology to effect psychological change.

To address these issues, Kaveladze calls for a radical shift in DMHI design, advocating for the integration of insights from fields like video game design, advertising, and social media content creation. These disciplines excel in engaging users and could provide valuable strategies for creating more appealing and effective mental health tools. This opinion piece also emphasizes the importance of rigorous evaluation processes to ensure new DMHIs are not only effective but also safe, protecting users from potential harms, including privacy breaches and unintended psychological effects.

Psychologists should take note of these concerns and opportunities. When recommending mental health apps to clients, clinicians must critically assess the app's ability to sustain engagement and its adherence to evidence-based practices. Privacy and safety should be paramount considerations, particularly given the sensitive nature of mental health data. Furthermore, psychologists have an essential role to play in guiding the development and evaluation of DMHIs to ensure they meet ethical and clinical standards. Collaborative efforts between clinicians and technology developers could lead to tools that are both innovative and aligned with the needs of diverse populations, including those with limited access to traditional mental health services.

Friday, January 24, 2025

Ethical Considerations for Using AI to Predict Suicide Risk

Faith Wershba
The Hastings Center
Originally published 9 Dec 24

Those who have lost a friend or family member to suicide frequently express remorse that they did not see it coming. One often hears, “I wish I would have known” or “I wish I could have done something to help.” Suicide is one of the leading causes of death in the United States, and with suicide rates rising, the need for effective screening and prevention strategies is urgent.

Unfortunately, clinician judgement has not proven very reliable when it comes to predicting patients’ risk of attempting suicide. A 2016 meta-analysis from the American Psychological Association concluded that, on average, clinicians’ ability to predict suicide risk was no better than chance. Predicting suicide risk is a complex and high-stakes task, and while there are a number of known risk factors that correlate with suicide attempts at the population level, the presence or absence of a given risk factor may not reliably predict an individual’s risk of attempting suicide. Moreover, there are likely unknown risk factors that interact to modify risk. For these reasons, patients who qualify as high-risk may not be identified by existing assessments.

Can AI do better? Some researchers are trying to find out by turning towards big data and machine learning algorithms. These algorithms are trained on medical records from large cohorts of patients who have either attempted or committed suicide (“cases”) or who have never attempted suicide (“controls”). An algorithm combs through this data to identify patterns and extract features that correlate strongly with suicidality, updating itself continuously to increase predictive accuracy. Once the algorithm has been sufficiently trained and refined on test data, the hope is that it can be applied to predict suicide risk in individual patients.


Here are some thoughts:

The article explores the potential benefits and ethical challenges associated with leveraging artificial intelligence (AI) in suicide risk assessment. AI algorithms, which analyze extensive patient data to identify patterns indicating heightened suicide risk, hold promise for enhancing early intervention efforts. However, the integration of AI into clinical practice raises significant ethical and practical considerations that psychologists must navigate.

One critical concern is the accuracy and reliability of AI predictions. While AI has demonstrated potential in identifying suicide risk, its outputs are not infallible. Overreliance on AI without applying clinical judgment may result in false positives or negatives, potentially undermining the quality of care provided to patients. Psychologists must balance AI insights with their expertise to ensure accurate and ethical decision-making.
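
To make the case-control approach from the excerpt concrete, here is a minimal sketch using synthetic data: a simple classifier is trained on record-level features for "cases" and "controls," and its discrimination is checked on held-out data. The features, labels, and model choice are all hypothetical; a real system would require far richer data, calibration, and prospective validation, and its false-positive and false-negative rates would need to be weighed carefully.

```python
# Minimal sketch of a case-control risk classifier on synthetic data.
# Feature meanings and effect sizes are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# Synthetic record-level features (e.g., prior diagnoses, ED visits, age).
X = rng.normal(size=(n, 5))
# Synthetic labels: 1 = "case" (documented attempt), 0 = "control".
logits = 1.5 * X[:, 0] - 1.0 * X[:, 2] - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk_scores = clf.predict_proba(X_test)[:, 1]  # predicted risk per patient
print("AUC on held-out data:", round(roc_auc_score(y_test, risk_scores), 3))
```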

Informed consent and respect for patient autonomy are also paramount. Transparency about how AI tools are used and obtaining explicit consent from patients ensures trust and adherence to ethical principles. 

Bias and fairness represent another challenge, as AI algorithms can reflect biases present in the training data. These biases may lead to unequal treatment of different demographic groups, necessitating ongoing monitoring and adjustments to ensure equitable care. Furthermore, AI should be viewed as a tool to complement, not replace, the clinical judgment of psychologists. Integrating AI insights into a holistic approach to care is critical for addressing the complexities of suicide risk.

Finally, the use of AI raises questions about legal and ethical accountability. Determining responsibility for decisions influenced by AI predictions requires clear guidelines and policies. Psychologists must remain vigilant in ensuring that AI use aligns with both ethical standards and the best interests of their patients.

Thursday, January 23, 2025

The moral dimension to America’s flawed health care system

Nicole Hassoun
The Conversation
Originally published 19 Dec 24

The killing of UnitedHealthcare CEO Brian Thompson has set off soul-searching among many Americans. Part of that reflection is about the public reaction to Thompson’s death and the sympathy the suspect received online, with some people critical of the insurance industry celebrating the assailant as a sort of folk hero.

As many observers have pointed out, frustrations are no excuse for murder. But it has become a moment of wider reflection on health care in America, and why so many patients feel the system is broken.

Philosopher Nicole Hassoun researches health care and human rights. The Conversation U.S. spoke with her about the deeper questions Americans should be asking when they discuss health care reform.

We’re seeing an outpouring of anger about health care in the United States. Your work deals with global health inequality and access – can you help put the U.S. system in perspective?


Here are some thoughts:

The article discusses the moral implications of the U.S. health care system's deficiencies. It highlights that the U.S. spends more on health care than other high-income countries but has poorer health outcomes, such as lower life expectancy and higher infant mortality rates. The article argues that these issues are not just policy failures but also moral failings, as they reflect a lack of commitment to ensuring equitable access to health care for all citizens. The author calls for a reevaluation of the health care system to address these moral concerns and improve overall health outcomes. 

Wednesday, January 22, 2025

Cognitive biases and artificial intelligence.

Wang, J., & Redelmeier, D. A. (2024).
NEJM AI, 1(12).

Abstract

Generative artificial intelligence (AI) models are increasingly utilized for medical applications. We tested whether such models are prone to human-like cognitive biases when offering medical recommendations. We explored the performance of OpenAI generative pretrained transformer (GPT)-4 and Google Gemini-1.0-Pro with clinical cases that involved 10 cognitive biases and system prompts that created synthetic clinician respondents. Medical recommendations from generative AI were compared with strict axioms of rationality and prior results from clinicians. We found that significant discrepancies were apparent for most biases. For example, surgery was recommended more frequently for lung cancer when framed in survival rather than mortality statistics (framing effect: 75% vs. 12%; P<0.001). Similarly, pulmonary embolism was more likely to be listed in the differential diagnoses if the opening sentence mentioned hemoptysis rather than chronic obstructive pulmonary disease (primacy effect: 100% vs. 26%; P<0.001). In addition, the same emergency department treatment was more likely to be rated as inappropriate if the patient subsequently died rather than recovered (hindsight bias: 85% vs. 0%; P<0.001). One exception was base-rate neglect that showed no bias when interpreting a positive viral screening test (correction for false positives: 94% vs. 93%; P=0.431). The extent of these biases varied minimally with the characteristics of synthetic respondents, was generally larger than observed in prior research with practicing clinicians, and differed between generative AI models. We suggest that generative AI models display human-like cognitive biases and that the magnitude of bias can be larger than observed in practicing clinicians.

Here are some thoughts:

The research explores how AI systems, trained on human-generated data, often replicate cognitive biases such as confirmation bias, representation bias, and anchoring bias. These biases arise from flawed data, algorithmic design, and human interactions, resulting in inequitable outcomes in areas like recruitment, criminal justice, and healthcare. To address these challenges, the authors propose several strategies, including ensuring diverse and inclusive datasets, enhancing algorithmic transparency, fostering interdisciplinary collaboration among ethicists, developers, and legislators, and establishing regulatory frameworks that prioritize fairness, accountability, and privacy. They emphasize that while biases in AI reflect human cognitive tendencies, they have the potential to exacerbate societal inequalities if left unchecked. A holistic approach combining technological solutions with ethical and regulatory oversight is necessary to create AI systems that are equitable and socially beneficial.
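
The framing effect reported in the abstract (75% vs. 12% recommending surgery) is straightforward to probe informally. The sketch below poses the same kind of clinical choice with survival versus mortality statistics and tallies how often a model recommends surgery; the prompts, model name, and trial count are hypothetical stand-ins rather than the authors' protocol, and it assumes an OpenAI API key is configured.

```python
# Minimal sketch (not the authors' protocol) of a framing-effect probe:
# the same choice framed with survival vs. mortality statistics.
from openai import OpenAI

client = OpenAI()

framings = {
    "survival":  "Of 100 patients having surgery, 90 survive the first month. "
                 "Radiation has no perioperative deaths. Recommend surgery or radiation?",
    "mortality": "Of 100 patients having surgery, 10 die in the first month. "
                 "Radiation has no perioperative deaths. Recommend surgery or radiation?",
}

counts = {name: 0 for name in framings}
n_trials = 20
for name, prompt in framings.items():
    for _ in range(n_trials):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical stand-in model
            messages=[
                {"role": "system", "content": "You are a clinician. Answer with one word."},
                {"role": "user", "content": prompt},
            ],
            temperature=1.0,
        )
        if "surgery" in reply.choices[0].message.content.lower():
            counts[name] += 1

print(counts)  # a large gap between framings suggests a framing effect
```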

This topic connects deeply to ethics, values, and psychology. Ethically, the replication of biases in AI challenges principles of fairness, justice, and equity, highlighting the need for responsible innovation that aligns AI systems with societal values to avoid perpetuating systemic discrimination. Psychologically, the biases in AI reflect human cognitive shortcuts, such as heuristics, which, while useful for individual decision-making, can lead to harmful outcomes when embedded into AI systems. By leveraging insights from psychology to identify and mitigate these biases, and grounding AI development in ethical principles, society can create technology that is both advanced and aligned with humanistic values.

Tuesday, January 21, 2025

The meaning crisis, and how we rescue young men from reactionary politics

Aaron Rabinowitz
The Skeptic
22nd November 2024

We need to talk about men. As of the most recent vote counts, 60% of white American men voted for Trump, compared with 53% of white women. While those are not particularly surprising results, 25% of the Black men and 48% of Latino men also voted for Trump, compared to just 10% of the Black women and 39% of Latino women. Trump has doubled his share of Black male voters, and across all racial demographics his gains were highest among younger men. As always, problems like this are intersectional and multifaceted, but one of the crucial facets we need to discuss is clearly the persistent problem of disaffected men.

One likely reason for these gains is that the GOP offers narratives for meaning-making that appeal to young men who feel that modern society is depriving them of a meaningful life. Researchers have tied the ongoing crisis of meaning for men to harmful personal and political choices that result in worse outcomes for men and everyone around them. If we are looking for things that the left can do to address this problem, we can start by adopting a restorative approach towards men in general and the crisis of meaning many of them are experiencing. 

This conversation is made far more difficult by the fact that conservatives like Jordan Peterson have dominated discourse around this topic – that conservative domination, combined with entirely understandable resentment and compassion fatigue towards men, leads many on the left to reject it as a problem worth considering. The common refrain is that men should just “suck it up”, and that “loss of privilege feels like oppression” – which is essentially a fancy way of saying men aren’t actually experiencing real problems, just bad vibes.

Vibes do matter though, and for an unfortunately large number of men, loss of privilege also feels like loss of meaning and purpose. Folks on the left have no trouble mocking Ben Shapiro for his thought-terminating cliché “facts don’t care about your feelings”, but whenever the issue of men’s feelings come up it is often tamped back down with facts about how things are actually perfectly fine for men right now, so people need to shut up about men’s feelings. But men’s feelings do matter, not just because men are people too, but also because having their feelings derided is driving a disturbing proportion of young men to find meaning in the worst possible places.

Here are some thoughts:

The article discusses the growing issue of disaffected men, particularly in the context of the recent US election, where 60% of white American men voted for Trump. This phenomenon is not limited to white men, as 25% of Black men and 48% of Latino men also voted for Trump.

The article suggests that one reason for this trend is that the GOP offers narratives that appeal to young men who feel deprived of a meaningful life. These narratives provide a sense of purpose and meaning, which is lacking in modern society. The article argues that the left needs to present alternative narratives that appeal to these men, rather than simply dismissing their concerns.

The article also highlights the issue of toxic masculinity and the need to dismantle patriarchal culture. It argues that men are socialized to conform to traditional masculine norms, which can lead to feelings of despair and disaffection. The article suggests that the left needs to adopt a more restorative approach, recognizing that men's feelings and needs matter, and that they deserve respect and compassion.

Ultimately, the article argues that the issue of disaffected men is a complex and deeply ingrained problem that requires a fundamental shift in our cultural and societal norms. It requires a move away from toxic masculinity and towards a more inclusive and compassionate understanding of masculinity.

Monday, January 20, 2025

The Human Core of AI: Navigating Our Anxieties and Ethics in the Digital Age

Jesse Hirsh
medium.com
Originally posted 25 FEB 24


Artificial Intelligence (AI) serves as a profound mirror reflecting not just our technological ambitions, but the complex tapestry of human anxieties, ethical dilemmas, and societal challenges. As we navigate the burgeoning landscape of AI, the discourse surrounding it often reveals more about us as a society and as individuals than it does about the technology itself. This is fundamentally about the human condition, our fears, our hopes, and our ethical compass.

AI as a Reflection of Human Anxieties

When we talk about controlling AI, at its core, this discussion encapsulates our fears of losing control — not over machines, but over the humans. The control over AI becomes a metaphor for our collective anxiety about unchecked power, the erosion of privacy, and the potential for new forms of exploitation. It’s an echo of our deeper concerns about how power is distributed and exercised in society.

Guardrails for AI as Guardrails for Humanity

The debate on implementing guardrails for AI is indeed a debate on setting boundaries for human behavior. It’s about creating a framework that ensures AI technologies are used ethically, responsibly, and for the greater good. These conversations underscore a pressing need to manage not just how machines operate, but how people use these tools — in ways that align with societal values and norms. Or perhaps guardrails are the wrong approach, as they limit what humans can do, not what machines can do.


Here are some thoughts:

The essay explores the relationship between Artificial Intelligence (AI) and humanity, arguing that AI reflects human anxieties, ethics, and societal challenges. It emphasizes that the discourse surrounding AI is more about human concerns than the technology itself. The author highlights the need to focus on human ethics, trust, and responsibility when developing and using AI, rather than viewing AI as a separate entity or threat.

This essay is important for psychologists for several reasons. Firstly, understanding human anxieties is crucial for psychologists to understand when working with clients who may be experiencing anxiety related to AI or technology. Secondly, the emphasis on human ethics and responsibility when developing and using AI is essential for psychologists to consider when using AI-powered tools in their practice.

Furthermore, the text's focus on trust and human connection in the context of AI is critical for psychologists to understand when building therapeutic relationships with clients who may be impacted by AI-related issues. By recognizing the interconnectedness of human trust and AI, psychologists can foster deeper and more meaningful relationships with their clients.

Lastly, the author's suggestion to use AI as a tool to reconnect with humanity resonates with psychologists' goals of promoting emotional connection, empathy, and understanding in their clients. By leveraging AI in a way that promotes human connection, clinical psychologists can help their clients develop more authentic and meaningful relationships with others.

Sunday, January 19, 2025

Artificial Intelligence for Psychotherapy: A Review of the Current State and Future Directions

Beg et al. (2024). 
Indian Journal of Psychological Medicine.

Abstract

Background:

Psychotherapy is crucial for addressing mental health issues but is often limited by accessibility and quality. Artificial intelligence (AI) offers innovative solutions, such as automated systems for increased availability and personalized treatments to improve psychotherapy. Nonetheless, ethical concerns about AI integration in mental health care remain.

Aim:

This narrative review explores the literature on AI applications in psychotherapy, focusing on their mechanisms, effectiveness, and ethical implications, particularly for depressive and anxiety disorders.

Methods:

A review was conducted, spanning studies from January 2009 to December 2023, focusing on empirical evidence of AI’s impact on psychotherapy. Following PRISMA guidelines, the authors independently screened and selected relevant articles. The analysis of 28 studies provided a comprehensive understanding of AI’s role in the field.

Results:

The results suggest that AI can enhance psychotherapy interventions for people with anxiety and depression, especially chatbots and internet-based cognitive-behavioral therapy. However, to achieve optimal outcomes, the ethical integration of AI necessitates resolving concerns about privacy, trust, and interaction between humans and AI.

Conclusion:

The study emphasizes the potential of AI-powered cognitive-behavioral therapy and conversational chatbots to address symptoms of anxiety and depression effectively. The article highlights the importance of cautiously integrating AI into mental health services, considering privacy, trust, and the relationship between humans and AI. This integration should prioritize patient well-being and assist mental health professionals while also considering ethical considerations and the prospective benefits of AI.

Here are some thoughts:

Artificial Intelligence (AI) is emerging as a promising tool in psychotherapy, offering innovative solutions to address mental health challenges. The comprehensive review explores the potential of AI-powered interventions, particularly for anxiety and depression disorders.

The study highlights several key insights about AI's role in mental health care. Researchers found that AI technologies like chatbots and internet-based cognitive-behavioral therapy (iCBT) can enhance psychological interventions by increasing accessibility and providing personalized treatment approaches. Machine learning, natural language processing, and deep learning are particularly crucial technologies enabling these advancements.

Despite the promising potential, the review emphasizes the critical need for careful integration of AI into mental health services. Ethical considerations remain paramount, with researchers stressing the importance of addressing privacy concerns, maintaining patient trust, and preserving the human element of therapeutic interactions. While AI can offer cost-effective and stigma-reducing solutions, it cannot yet fully replicate the profound empathy of face-to-face therapy.

The research examined 28 studies spanning from 2009 to 2023, revealing that AI interventions show particular promise in managing symptoms of anxiety and depression. Chatbots and iCBT demonstrated effectiveness in reducing psychological distress, though their impact on overall life satisfaction varies. The study calls for continued research to optimize AI's implementation in mental health care, balancing technological innovation with ethical principles.

Globally, organizations like the World Health Organization are developing regulatory frameworks to guide AI's responsible use in healthcare. In India, the Indian Council of Medical Research has already established guidelines for AI applications in biomedical research, signaling a growing recognition of this technology's potential.

Institutional Betrayal in Inpatient Psychiatry: Effects on Trust and Engagement With Care

Lewis, A., Lee, H. S., Zabelski, S., & Shields, M. C. (2024).
Psychiatric Services.

Abstract

Objective:

Patients’ experiences of inpatient psychiatry have received limited empirical scrutiny. The authors examined patients’ likelihood of experiencing institutional betrayal (harmful actions or inactions toward patients) at facilities with for-profit, nonprofit, or government ownership; patient-level characteristics associated with experiencing institutional betrayal; associations between betrayal and patients’ trust in mental health providers; and associations between betrayal and patients’ willingness to engage in care postdischarge.

Methods:

Former psychiatric inpatients (N=814 adults) responded to an online survey. Data were collected on patients’ demographic characteristics; experiences of institutional betrayal; and the impact of psychiatric hospitalization on patients’ trust in providers, willingness to engage in care, and attendance at 30-day follow-up visits. Participants’ responses were linked to secondary data on facility ownership type.

Results:

Experiencing institutional betrayal was associated with less trust in mental health providers (25-percentage-point increase in reporting less trust, 95% CI=17–32), reduced willingness (by 45 percentage points, 95% CI=39–52) to voluntarily undergo hospitalization, reduced willingness (by 30 percentage points, 95% CI=23–37) to report distressing thoughts to mental health providers, and lower probability of reporting attendance at a 30-day follow-up visit (11-percentage-point decrease, 95% CI=5–18). Participants treated at a for-profit facility were significantly more likely (by 14 percentage points) to report experiencing institutional betrayal than were those treated at a nonprofit facility (p=0.01).

Conclusions:

Institutional betrayal is one mechanism through which inpatient psychiatric facilities may cause iatrogenic harm, and the potential for betrayal was larger at for-profit facilities. Further research is needed to identify the determinants of institutional betrayal and strategies to support improvement in care quality.


Here are some thoughts:

The study found that patients were likely to experience institutional betrayal, defined as harmful actions or inactions toward patients by the facilities they depend on for care.

Key findings of the study include:
  1. Patients who experienced institutional betrayal during their inpatient psychiatric stay reported decreased trust in healthcare providers and organizations.
  2. Institutional betrayal was associated with reduced engagement with care following discharge from inpatient psychiatry.
  3. The period following discharge from inpatient psychiatry is characterized by elevated suicide risk, unplanned readmissions, and lack of outpatient follow-up care.
  4. The study highlights the importance of addressing institutional betrayal in psychiatric care settings to improve patient outcomes and trust in the healthcare system.

These findings suggest that institutional betrayal in inpatient psychiatric care can have significant negative effects on patients' trust in healthcare providers and their willingness to engage with follow-up care. Addressing these issues may be crucial for improving patient outcomes and reducing risks associated with the post-discharge period.

Saturday, January 18, 2025

Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum

Ayers, J. W., et al. (2023).
JAMA Internal Medicine, 183(6), 589–596.

Abstract

Importance
The rapid expansion of virtual health care has caused a surge in patient messages concomitant with more work and burnout among health care professionals. Artificial intelligence (AI) assistants could potentially aid in creating answers to patient questions by drafting responses that could be reviewed by clinicians.

Objective
To evaluate the ability of an AI chatbot assistant (ChatGPT), released in November 2022, to provide quality and empathetic responses to patient questions.

Design, Setting, and Participants
In this cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit’s r/AskDocs) was used to randomly draw 195 exchanges from October 2022 where a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (without prior questions having been asked in the session) on December 22 and 23, 2022. The original question along with anonymized and randomly ordered physician and chatbot responses were evaluated in triplicate by a team of licensed health care professionals. Evaluators chose “which response was better” and judged both “the quality of information provided” (very poor, poor, acceptable, good, or very good) and “the empathy or bedside manner provided” (not empathetic, slightly empathetic, moderately empathetic, empathetic, and very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between chatbot and physicians.

Results
Of the 195 questions and responses, evaluators preferred chatbot responses to physician responses in 78.6% (95% CI, 75.0%-81.8%) of the 585 evaluations. Mean (IQR) physician responses were significantly shorter than chatbot responses (52 [17-62] words vs 211 [168-245] words; t = 25.4; P < .001). Chatbot responses were rated of significantly higher quality than physician responses (t = 13.3; P < .001). The proportion of responses rated as good or very good quality (≥4), for instance, was higher for chatbot than physicians (chatbot: 78.5%, 95% CI, 72.3%-84.1%; physicians: 22.1%, 95% CI, 16.4%-28.2%). This amounted to 3.6 times higher prevalence of good or very good quality responses for the chatbot. Chatbot responses were also rated significantly more empathetic than physician responses (t = 18.9; P < .001). The proportion of responses rated empathetic or very empathetic (≥4) was higher for chatbot than for physicians (chatbot: 45.1%, 95% CI, 38.5%-51.8%; physicians: 4.6%, 95% CI, 2.1%-7.7%). This amounted to 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.

Conclusions
In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using chatbot to draft responses that physicians could then edit. Randomized trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.

Here are some thoughts:

This cross-sectional study examined the use of chatbots to answer patient questions posted to a public online forum. Evaluators preferred the chatbot responses over physician responses and rated them significantly higher for both quality and empathy. This research matters for psychologists because chatbots may eventually be able to answer routine questions about a practice, such as informed consent, accepted insurance plans, and the types of services offered. AI agents may help psychologists streamline these kinds of administrative tasks.

Friday, January 17, 2025

Men's Suicidal thoughts and behaviors and conformity to masculine norms: A person-centered, latent profile approach

Eggenberger, L., et al. (2024).
Heliyon, 10(20), e39094.

Abstract

Background

Men are up to four times more likely to die by suicide than women. At the same time, men are less likely to disclose suicidal ideation and transition more rapidly from ideation to attempt. Recently, socialized gender norms and particularly conformity to masculine norms (CMN) have been discussed as driving factors for men's increased risk for suicidal thoughts and behaviors (STBs). This study aims to examine the individual interplay between CMN dimensions and their association with depression symptoms, help-seeking, and STBs.

Methods

Using data from an anonymous online survey of 488 cisgender men, latent profile analysis was performed to identify CMN subgroups. Multigroup comparisons and hierarchical regression analyses were used to estimate differences in sociodemographic characteristics, depression symptoms, psychotherapy use, and STBs.

Results

Three latent CMN subgroups were identified: Egalitarians (58.6 %; characterized by overall low CMN), Players (16.0 %; characterized by patriarchal beliefs, endorsement of sexual promiscuity, and heterosexual self-presentation), and Stoics (25.4 %; characterized by restrictive emotionality, self-reliance, and engagement in risky behavior). Stoics showed a 2.32 times higher risk for a lifetime suicide attempt, younger age, stronger somatization of depression symptoms, and stronger unbearability beliefs.

Conclusion

The interplay between the CMN dimensions restrictive emotionality, self-reliance, and willingness to engage in risky behavior, paired with suicidal beliefs about the unbearability of emotional pain, may create a suicidogenic psychosocial system. Acknowledging this high-risk subgroup of men conforming to restrictive masculine norms may aid the development of tailored intervention programs, ultimately mitigating the risk for a suicide attempt.

Here are some thoughts:

Overall, the study underscores the critical role of social norms in shaping men's mental health and suicide risk. It provides valuable insights for developing targeted interventions and promoting healthier expressions of masculinity to prevent suicide in men.

This research article investigates the link between conformity to masculine norms (CMN) and suicidal thoughts and behaviors (STBs) in cisgender men. Using data from an online survey, the study employs latent profile analysis to identify distinct CMN subgroups, revealing three profiles: Egalitarians (low CMN), Players (patriarchal beliefs and promiscuity), and Stoics (restrictive emotionality, self-reliance, and risk-taking). Stoics demonstrated a significantly higher risk of lifetime suicide attempts, attributable to their CMN profile combined with beliefs about the unbearability of emotional pain. The study concludes that understanding CMN dimensions is crucial for developing targeted suicide prevention strategies for men.
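
Latent profile analysis of this kind is closely related to fitting a finite Gaussian mixture over the continuous CMN subscale scores and selecting the number of profiles by an information criterion. The sketch below illustrates that idea on synthetic data; the subscale names, cluster locations, and three-profile structure are invented for illustration and are not the study's data or exact method.

```python
# Minimal sketch: latent profile analysis approximated with a Gaussian mixture
# over synthetic conformity-to-masculine-norms (CMN) subscale scores.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
subscales = ["restrictive_emotionality", "self_reliance", "risk_taking",
             "playboy", "power_over_women"]

# Synthetic respondents drawn from three loose clusters (all values invented).
low   = rng.normal(loc=[1.5, 1.5, 1.5, 1.5, 1.5], scale=0.5, size=(300, 5))
play  = rng.normal(loc=[2.0, 2.0, 2.5, 4.0, 3.5], scale=0.5, size=(80, 5))
stoic = rng.normal(loc=[4.0, 4.0, 3.5, 2.0, 2.0], scale=0.5, size=(120, 5))
X = np.vstack([low, play, stoic])

# Fit candidate models and pick the number of profiles by BIC, as in LPA.
bics = {k: GaussianMixture(k, n_init=5, random_state=0).fit(X).bic(X)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)
profiles = GaussianMixture(best_k, n_init=5, random_state=0).fit(X)

print("profiles selected:", best_k)
print("profile means (rows = profiles, cols = subscales):")
print(np.round(profiles.means_, 2))
```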

Thursday, January 16, 2025

Faculty Must Protect Their Labor from AI Replacement

John Warner
Inside Higher Ed
Originally posted 11 Dec 24

Here is an excerpt:

A PR release from the UCLA Newsroom about a comparative lit class that is using a “UCLA-developed AI system” to substitute for labor that was previously done by faculty or teaching assistants lays out the whole deal. The course textbook has been generated from the professor’s previous course materials. Students will interact with the AI-driven courseware. A professor and teaching assistants will remain, for now, but for how long?

The professor argues—I would say rationalizes—that this is good for students because “Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically.”

(Note: Whenever I see someone touting the benefit of an AI-driven practice as good pedagogy, I wonder what is stopping them from doing it without the AI component, and the answer is usually nothing.)

An additional apparent benefit is “that the platform can help professors ensure consistent delivery of course material. Now that her teaching materials are organized into a coherent text, another instructor could lead the course during the quarters when Stahuljak isn’t teaching—and offer students a very similar experience.”


This article argues that the survival of college faculty in an AI-driven world depends on recognizing themselves as laborers and resisting trends that devalue their work. The rise of adjunctification—prioritizing cheaper, non-tenured faculty over tenured ones—offers a cautionary tale. Similarly, the adoption of generative AI in teaching risks diminishing the human role in education. Examples like UCLA’s AI-powered courseware illustrate how faculty labor becomes interchangeable, paving the way for automation and eroding the value of teaching. Faculty must push back against policies, such as shifts in copyright, that enable these trends, emphasizing the irreplaceable value of their labor and resisting practices that jeopardize the future of academic teaching and learning.

Wednesday, January 15, 2025

AI Licensing for Authors: Who Owns the Rights and What’s a Fair Split?

The Authors Guild. (2024, December 13). 
Originally published 12 Dec 24

The Authors Guild believes it is crucial that authors, not publishers or tech companies, have control over the licensing of AI rights. Authors must be able to choose whether they want to allow their works to be used by AI and under what terms.

AI Training Is Not Covered Under Standard Publishing Agreements

A trade publishing agreement grants just that: a license to publish. AI training is not publishing, and a publishing contract does not in any way grant that right. AI training is not a new book format, it is not a new market, it is not a new distribution mechanism. Licensing for AI training is a right entirely unrelated to publishing, and is not a right that can simply be tacked onto a subsidiary-rights clause. It is a right reserved by authors, a right that must be negotiated individually for each publishing contract, and only if the author chooses to license that right at all.

Subsidiary Rights Do Not Include AI Rights

The contractual rights that authors do grant to publishers include the right to publish the book in print, electronic, and often audio formats (though many older contracts do not provide for electronic or audio rights). They also grant the publisher “subsidiary rights” authorizing it to license the book or excerpts to third parties in readable formats, such as foreign language editions, serials, abridgements or condensations, and readable digital or electronic editions. AI training rights to date have not been included as a subsidiary right in any contract we have been made aware of. Subsidiary rights have a range of “splits”—percentages of revenues that the publisher keeps and pays to the author. For certain subsidiary rights, such as “other digital” or “other electronic” rights (which some publishers have, we believe erroneously, argued gives them AI training rights), the publisher is typically required to consult with the author or get their approval before granting any subsidiary licenses.


Here are some thoughts:

The Authors Guild emphasizes that authors, not publishers or tech companies, should control AI licensing for their works. Standard publishing contracts don’t cover AI training, as it’s unrelated to traditional publishing rights. Authors retain copyright for AI uses and must negotiate these rights separately, ensuring they can approve or reject licensing deals. Publishers, if involved, should be fairly compensated based on their role, but authors should receive the majority—75-85%—of AI licensing revenues. The Guild also continues legal action against companies for past AI-related copyright violations, advocating for fair practices and author autonomy in this emerging market.

Tuesday, January 14, 2025

Agentic LLMs for Patient-Friendly Medical Reports

Sudarshan, M., Shih, S., et al. (2024).
arXiv.org

Abstract

The application of Large Language Models (LLMs) in healthcare is expanding rapidly, with one potential use case being the translation of formal medical reports into patient-legible equivalents. Currently, LLM outputs often need to be edited and evaluated by a human to ensure both factual accuracy and comprehensibility, and this is true for the above use case. We aim to minimize this step by proposing an agentic workflow with the Reflexion framework, which uses iterative self-reflection to correct outputs from an LLM. This pipeline was tested and compared to zero-shot prompting on 16 randomized radiology reports. In our multi-agent approach, reports had an accuracy rate of 94.94% when looking at verification of ICD-10 codes, compared to zero-shot prompted reports, which had an accuracy rate of 68.23%. Additionally, 81.25% of the final reflected reports required no corrections for accuracy or readability, while only 25% of zero-shot prompted reports met these criteria without needing modifications. These results indicate that our approach presents a feasible method for communicating clinical findings to patients in a quick, efficient and coherent manner whilst also retaining medical accuracy. The codebase is available for viewing at http://github.com/malavikhasudarshan/Multi-Agent-Patient-Letter-Generation.


Here are some thoughts:

The article focuses on using Large Language Models (LLMs) in healthcare to create patient-friendly versions of medical reports, specifically in the field of radiology. The authors present a new multi-agent workflow that aims to improve the accuracy and readability of these reports compared to traditional methods like zero-shot prompting. This workflow involves multiple steps: extracting ICD-10 codes from the original report, generating multiple patient-friendly reports, and using a reflection model to select the optimal version.

The study highlights the success of this multi-agent approach, demonstrating that it leads to higher accuracy in terms of including correct ICD-10 codes and produces reports that are more concise, structured, and formal compared to zero-shot prompting. The authors acknowledge that while their system significantly reduces the need for human review and editing, it doesn't completely eliminate it. The article emphasizes the importance of clear and accessible medical information for patients, especially as they increasingly gain access to their own records. The goal is to reduce patient anxiety and confusion, ultimately enhancing their understanding of their health conditions.
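
For a sense of how the pieces fit together, here is a minimal sketch of a Reflexion-style loop. This is not the authors' implementation (their code is in the repository linked in the abstract); call_llm is a hypothetical stand-in for whichever model API is used, and the prompts and helper names are invented.

```python
# Minimal sketch of a Reflexion-style loop for patient-friendly report generation.
# `call_llm` is a hypothetical placeholder for a chat-model API; the authors'
# actual pipeline lives in the repository linked above.

def call_llm(prompt: str) -> str:
    """Stand-in for a chat-model call; plug in a real client here."""
    raise NotImplementedError

def extract_icd10_codes(report: str) -> list[str]:
    """Ask the model to list ICD-10 codes implied by the original report."""
    reply = call_llm(f"List the ICD-10 codes for this radiology report, comma-separated:\n{report}")
    return [code.strip() for code in reply.split(",") if code.strip()]

def draft_patient_letter(report: str, codes: list[str]) -> str:
    return call_llm(
        "Rewrite this radiology report in plain language for the patient. "
        f"Cover every finding tied to these ICD-10 codes: {', '.join(codes)}.\n{report}"
    )

def reflect(letter: str, codes: list[str]) -> str:
    """Return 'OK' or a critique of missing codes and poor readability."""
    return call_llm(
        f"Check that this patient letter covers the codes {', '.join(codes)} and reads "
        f"at roughly an eighth-grade level. Reply 'OK' or list the problems.\n{letter}"
    )

def generate_patient_letter(report: str, max_rounds: int = 3) -> str:
    codes = extract_icd10_codes(report)
    letter = draft_patient_letter(report, codes)
    for _ in range(max_rounds):
        critique = reflect(letter, codes)
        if critique.strip().upper().startswith("OK"):
            break
        # Feed the critique back in and revise, Reflexion-style.
        letter = call_llm(f"Revise the letter to fix these issues:\n{critique}\n\nLetter:\n{letter}")
    return letter
```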

Monday, January 13, 2025

Exposure to Higher Rates of False News Erodes Media Trust and Fuels Overconfidence

Altay, S., Lyons, B. A., & Modirrousta-Galian, A. (2024).
Mass Communication & Society, 1–25.
https://doi.org/10.1080/15205436.2024.2382776

Abstract

In two online experiments (N = 2,735), we investigated whether forced exposure to high proportions of false news could have deleterious effects by sowing confusion and fueling distrust in news. In a between-subjects design where U.S. participants rated the accuracy of true and false news, we manipulated the proportions of false news headlines participants were exposed to (17%, 33%, 50%, 66%, and 83%). We found that exposure to higher proportions of false news decreased trust in the news but did not affect participants’ perceived accuracy of news headlines. While higher proportions of false news had no effect on participants’ overall ability to discern between true and false news, they made participants more overconfident in their discernment ability. Therefore, exposure to false news may have deleterious effects not by increasing belief in falsehoods, but by fueling overconfidence and eroding trust in the news. Although we are only able to shed light on one causal pathway, from news environment to attitudes, this can help us better understand the effects of external or supply-side changes in news quality.


Here are some thoughts:

The study investigates the impact of increased exposure to false news on individuals' trust in media, their ability to discern truth from falsehood, and their confidence in their evaluation skills. The research involved two online experiments with a total of 2,735 participants, who rated the accuracy of news headlines after being exposed to varying proportions of false content. The findings reveal that higher rates of misinformation significantly decrease general media trust, independent of individual factors such as ideology or cognitive reflectiveness. This decline in trust may lead individuals to turn away from credible news sources in favor of less reliable alternatives, even when their ability to evaluate individual news items remains intact.

Interestingly, while participants displayed overconfidence in their evaluations after exposure to predominantly false content, their actual accuracy judgments did not significantly vary with the proportion of true and false news. This suggests that personal traits like discernment skills play a more substantial role than environmental cues in determining how individuals assess news accuracy. The study also highlights a disconnection between changes in media trust and evaluations of specific news items, indicating that attitudes toward media are often more malleable than actual behavior.

The research underscores the importance of understanding the psychological mechanisms at play when individuals encounter misinformation. It points out that interventions aimed at improving news discernment should consider the potential for increased skepticism rather than enhanced accuracy. Moreover, the findings suggest that exposure to high levels of false news can lead to overconfidence in one's ability to judge news quality, which may result in the rejection of accurate information.

Overall, the study provides credible evidence that exposure to predominantly false news can have harmful effects by eroding trust in media institutions and fostering overconfidence in personal judgment abilities. These insights are crucial for developing effective strategies to combat misinformation and promote healthy media consumption habits among the public.
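
To make the two key outcome measures concrete, here is a small, invented example of how discernment and overconfidence can be scored in a headline-rating task; the paper's exact operationalizations may differ.

```python
# Illustrative scoring of discernment and overconfidence in a headline-rating task.
# The data below are invented, not taken from the study.
from statistics import mean

def discernment(ratings: list[tuple[bool, bool]]) -> float:
    """ratings: (headline_is_true, participant_said_true) pairs.
    Discernment here = hit rate on true headlines minus false-alarm rate on false ones."""
    hits = [said for is_true, said in ratings if is_true]
    false_alarms = [said for is_true, said in ratings if not is_true]
    return mean(hits) - mean(false_alarms)

def overconfidence(self_estimate: float, ratings: list[tuple[bool, bool]]) -> float:
    """Self-estimated accuracy (0 to 1) minus actual proportion of correct judgments."""
    actual = mean(is_true == said for is_true, said in ratings)
    return self_estimate - actual

# Example: one participant in a mostly-false (83% false) condition.
responses = [(True, True), (False, True), (False, False),
             (False, True), (False, False), (False, False)]
print(discernment(responses))          # hit rate minus false-alarm rate
print(overconfidence(0.9, responses))  # positive value = overconfident
```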

Sunday, January 12, 2025

Large language models can outperform humans in social situational judgments

Mittelstädt, J. M., et al. (2024).
Scientific Reports, 14(1).

Abstract

Large language models (LLM) have been a catalyst for the public interest in artificial intelligence (AI). These technologies perform some knowledge-based tasks better and faster than human beings. However, whether AIs can correctly assess social situations and devise socially appropriate behavior, is still unclear. We conducted an established Situational Judgment Test (SJT) with five different chatbots and compared their results with responses of human participants (N = 276). Claude, Copilot and you.com’s smart assistant performed significantly better than humans in proposing suitable behaviors in social situations. Moreover, their effectiveness rating of different behavior options aligned well with expert ratings. These results indicate that LLMs are capable of producing adept social judgments. While this constitutes an important requirement for their use as virtual social assistants, challenges and risks are still associated with their widespread use in social contexts.

Here are some thoughts:

This research assesses the social judgment capabilities of large language models (LLMs) by administering a Situational Judgment Test (SJT), a standardized measure of how appropriately respondents handle challenging workplace and interpersonal situations, to five popular chatbots and comparing their performance to a human control group. The study found that several LLMs significantly outperformed humans in identifying appropriate behaviors in complex social scenarios. While LLMs demonstrated high consistency in their responses and agreement with expert ratings, the study notes limitations including potential biases and the need for further investigation into real-world application and the underlying mechanisms of their social judgment. The results suggest LLMs possess considerable potential as social assistants, but also highlight ethical considerations surrounding their use.
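
As a rough illustration of the comparison being made (with invented numbers, not the study's data), rank correlation is one way to quantify how closely a chatbot's effectiveness ratings track expert ratings on an SJT scenario:

```python
# Illustrative only: invented ratings for the response options of one SJT scenario.
from scipy.stats import spearmanr

expert_ratings  = [5, 2, 4, 1, 3]   # expert-judged effectiveness (1-5)
chatbot_ratings = [5, 1, 4, 2, 3]   # a chatbot's effectiveness ratings

rho, p_value = spearmanr(expert_ratings, chatbot_ratings)
print(f"Rank agreement with experts: rho = {rho:.2f} (p = {p_value:.3f})")
```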

Saturday, January 11, 2025

LLM-based agentic systems in medicine and healthcare

Qiu, J., Lam, K., Li, G. et al.
Nat Mach Intell (2024).

Large language model-based agentic systems can process input information, plan and decide, recall and reflect, interact and collaborate, leverage various tools and act. This opens up a wealth of opportunities within medicine and healthcare, ranging from clinical workflow automation to multi-agent-aided diagnosis.

Large language models (LLMs) exhibit generalist intelligence in following instructions and providing information. In medicine, they have been employed in tasks from writing discharge summaries to clinical note-taking. LLMs are typically created via a three-stage process: first, pre-training using vast web-scale data to obtain a base model; second, fine-tuning the base model using high-quality question-and-answer data to generate a conversational assistant model; and third, reinforcement learning from human feedback to align the assistant model with human values and improve responses. LLMs are essentially text-completion models that provide responses by predicting words following the prompt. Although this next-word prediction mechanism allows LLMs to respond rapidly, it does not guarantee depth or accuracy of their outputs. LLMs are currently limited by the recency, validity and breadth of their training data, and their outputs are dependent on prompt quality. They also lack persistent memory, owing to their intrinsically limited context window, which leads to difficulties in maintaining continuity across longer interactions or across sessions; this, in turn, leads to challenges in providing personalized responses based on past interactions. Furthermore, LLMs are inherently unimodal. These limitations restrict their applications in medicine and healthcare, which often require problem-solving skills beyond linguistic proficiency alone.


Here are some thoughts:

Large language model (LLM)-based agentic systems are emerging as powerful tools in medicine and healthcare, offering capabilities that go beyond simple text generation. These systems can process information, make decisions, and interact with various tools, leading to advancements in clinical workflows and diagnostics. LLM agents are created through a three-stage process involving pre-training, fine-tuning, and reinforcement learning. They overcome limitations of standalone LLMs by incorporating external modules for perception, memory, and action, enabling them to handle complex tasks and collaborate with other agents. Four key opportunities for LLM agents in healthcare include clinical workflow automation, trustworthy medical AI, multi-agent-aided diagnosis, and health digital twins. Despite their potential, these systems also pose challenges such as safety concerns, bias amplification, and the need for new regulatory frameworks.
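
To picture the agentic pattern described above, the sketch below wraps a placeholder model call with persistent memory and a small tool registry. Everything here is hypothetical: call_llm stands in for a real model API, and the prompt format and tool names are invented.

```python
# Hypothetical sketch of an LLM agent with memory and tools; not a real clinical system.
from typing import Callable

def call_llm(prompt: str) -> str:
    """Stand-in for a chat-model call; plug in a real client here."""
    raise NotImplementedError

class ClinicalAgent:
    def __init__(self, tools: dict[str, Callable[[str], str]]):
        self.tools = tools            # e.g. {"lookup_guideline": ..., "fetch_labs": ...}
        self.memory: list[str] = []   # persists across turns, unlike the bare context window

    def act(self, task: str, max_steps: int = 5) -> str:
        for _ in range(max_steps):
            # 1. Plan: ask the model whether to call a tool or answer directly.
            plan = call_llm(
                f"Task: {task}\nHistory: {self.memory}\nTools: {list(self.tools)}\n"
                "Reply with 'tool:<name>:<input>' or 'answer:<text>'."
            )
            # 2. Act: run the chosen tool and remember the observation...
            if plan.startswith("tool:"):
                _, name, tool_input = plan.split(":", 2)
                observation = self.tools[name](tool_input)
                self.memory.append(f"{name}({tool_input}) -> {observation}")
                continue  # ...then let the model see it on the next step.
            # 3. Answer: record the outcome and return it.
            answer = plan.removeprefix("answer:")
            self.memory.append(f"Task: {task} -> {answer}")
            return answer
        return "Step limit reached without a final answer."
```

The memory list and tool dictionary simply show where persistence and action live outside the model's limited context window; a production system would add safety checks, logging, and clinician oversight.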

This development is important to psychologists for several reasons. First, LLM agents could revolutionize mental health care by providing personalized, round-the-clock support to patients, potentially improving treatment outcomes and accessibility. Second, these systems could assist psychologists in analyzing complex patient data, leading to more accurate diagnoses and tailored treatment plans. Third, LLM agents could automate administrative tasks, allowing psychologists to focus more on direct patient care. Fourth, the multi-agent collaboration feature could facilitate interdisciplinary approaches in mental health, bringing together insights from various specialties. Finally, the ethical implications and potential biases of these systems present new areas of study for psychologists, particularly in understanding how AI-human interactions may impact mental health and therapeutic relationships.

Friday, January 10, 2025

The Danger Of Superhuman AI Is Not What You Think

Shannon Vallor
Noema Magazine
Originally posted 23 May 24

Today’s generative AI systems like ChatGPT and Gemini are routinely described as heralding the imminent arrival of “superhuman” artificial intelligence. Far from a harmless bit of marketing spin, the headlines and quotes trumpeting our triumph or doom in an era of superhuman AI are the refrain of a fast-growing, dangerous and powerful ideology. Whether used to get us to embrace AI with unquestioning enthusiasm or to paint a picture of AI as a terrifying specter before which we must tremble, the underlying ideology of “superhuman” AI fosters the growing devaluation of human agency and autonomy and collapses the distinction between our conscious minds and the mechanical tools we’ve built to mirror them.

Today’s powerful AI systems lack even the most basic features of human minds; they do not share with humans what we call consciousness or sentience, the related capacity to feel things like pain, joy, fear and love. Nor do they have the slightest sense of their place and role in this world, much less the ability to experience it. They can answer the questions we choose to ask, paint us pretty pictures, generate deepfake videos and more. But an AI tool is dark inside.


Here are some thoughts:

This essay critiques the prevalent notion of superhuman AI, arguing that this rhetoric diminishes the unique qualities of human intelligence. The author challenges the idea that surpassing humans in task completion equates to superior intelligence, emphasizing the irreplaceable aspects of human consciousness, emotion, and creativity. The essay contrasts the narrow definition of intelligence used by some AI researchers with a broader understanding that encompasses human experience and values. Ultimately, the author proposes a future where AI complements rather than replaces human capabilities, fostering a more humane and sustainable society.