Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, July 19, 2024

Turn a Kind Eye: Offering Positive Reframing

Menzin E. R. (2024).
JAMA Internal Medicine.
Advance online publication.

Here is an excerpt:

I have seen many patients struggle with anxiety and obsessive-compulsive disorder. I firmly believe in my obligation to connect them with evidence-based therapy and offer pharmacologic treatment. The cognitive behavioral therapy technique of reframing, which is so useful for treating anxiety disorders, can serve as a useful lens for the disorder itself.1 I try to reframe by pointing out their ability to see patterns and the strengths intrinsic to this nonneurotypical brain. Perhaps with this mindset, they can look at their behaviors with grace.

Physicians are problem solvers by nature and training. When faced with symptoms, we tend to go directly for the cure. When there are no or only suboptimal solutions, we tend to offer sympathy instead of strategy. Rather than apologize, we can reframe: build the scaffolding to allow patients to change their thought patterns. As with all strategies, this is not universally applicable. You cannot positively reframe a life-threatening diagnosis; to do so insults and minimizes the patient's distress. There are times to sit with patients as their house crumbles, and there are times to help them reframe the chaos.

Recently, I saw this done in another unlikely corner. I took my 89-year-old father to discuss a shoulder replacement with the orthopedist. "Hang on," the surgeon exclaimed enthusiastically, holding his capable hands in the air. "Before we talk about the surgical options, you tore your rotator cuff skiing Killington at 86? That's amazing!" With that phrase, he reframed my father's injury from the frailty of old age to a badge of athletic honor (though he never saw my dad ski). It does not change my father's difficult decision to live with the tear or a grueling repair. Yet as he uses his right hand to lift his left arm, perhaps he will think of the 50 years of skiing or the feeling of fresh snow beneath his skis. Instead of feeling angry, I now watch him maneuver that arm and recall the family ski trips, the children and grandchildren he taught to ski. Sometimes, we all need kind eyes.

Here are some thoughts: 

The article explores the concept of well-being as a complex interplay between internal mental states and external socio-cultural factors. It proposes a holistic view, emphasizing the importance of both internal and external influences on happiness.

The article discusses strategies for positive reframing, which involves shifting negative interpretations of situations or experiences toward a more positive perspective. This reframing can be applied to internal thoughts and emotions as well as external circumstances.

"Turning a kind eye" is a metaphor for adopting a positive and understanding perspective towards oneself and one's environment, ultimately contributing to greater well-being.

Thursday, July 18, 2024

Far-right extremist groups show surging growth, new annual study shows

Will Carless
Originally published 7 June 24

Far-right extremist groups are actively working to undermine U.S. democracy and are organizing in record numbers, according to an annual report from the Southern Poverty Law Center. Meanwhile, extremist groups have been targeting faith-based groups that assist migrants on the U.S.-Mexico border, and a New Jersey state trooper is fired for having a racist tattoo.

It’s the week in extremism.

Far-right extremists suffered a blow in the wake of the Jan. 6 insurrection. More than 1,000 people were charged and key leaders were imprisoned, some for decades. But a new annual report from the Southern Poverty Law Center suggests the far-right has regrouped and is taking aim at democratic institutions across the country. 

The Year in Hate and Extremism from the Southern Poverty Law Center.

Here are some thoughts:

A new study highlighting the surge in far-right extremism holds significant weight for psychologists working with marginalized groups. This growth presents a heightened risk of threats and violence for these communities. Psychologists can play a vital role by understanding the vulnerabilities extremists prey on, fostering resilience in marginalized groups, and promoting social cohesion to counter extremist narratives. By acknowledging this trend, psychologists can equip themselves to better support the mental health of these vulnerable populations.

Wednesday, July 17, 2024

“I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded

By Sigal Samuel
Originally posted 18 May 24

For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity. 

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out. 

What’s going on here?

If you’ve been following the saga on social media, you might think OpenAI secretly made a huge technological breakthrough. The meme “What did Ilya see?” speculates that Sutskever, the former chief scientist, left because he saw something horrifying, like an AI system that could destroy humanity. 

But the real answer may have less to do with pessimism about technology and more to do with pessimism about humans — and one human in particular: Altman. According to sources familiar with the company, safety-minded employees have lost faith in him.

Here are some thoughts:

The OpenAI team's reported issues expose critical ethical concerns in AI development. A potential misalignment of values emerges when profit or technological advancement overshadows safety and ethical considerations. Businesses must strive for transparency, prioritizing human well-being and responsible innovation throughout the development process.

Prioritizing AI Safety

The departure of the safety team underscores the need for robust safeguards. Businesses developing AI should dedicate resources to mitigating risks like bias and misuse. Strong ethical frameworks and oversight committees can ensure responsible development.

Employee Concerns and Trust

The article hints at a lack of trust within OpenAI. Businesses must foster open communication by addressing employee concerns about project goals, risks, and ethics. Respecting employee rights to raise ethical concerns is crucial for maintaining trust and responsible AI development.

By prioritizing ethical considerations, aligning values, and fostering transparency, businesses can navigate the complexities of AI development and ensure their creations benefit humanity.

Tuesday, July 16, 2024

Robust and interpretable AI-guided marker for early dementia prediction in real-world clinical settings

Lee, L. Y., et al. (2024).
EClinicalMedicine, 102725.


Predicting dementia early has major implications for clinical management and patient outcomes. Yet, we still lack sensitive tools for stratifying patients early, resulting in patients being undiagnosed or wrongly diagnosed. Despite rapid expansion in machine learning models for dementia prediction, limited model interpretability and generalizability impede translation to the clinic.


We build a robust and interpretable predictive prognostic model (PPM) and validate its clinical utility using real-world, routinely-collected, non-invasive, and low-cost (cognitive tests, structural MRI) patient data. To enhance scalability and generalizability to the clinic, we: 1) train the PPM with clinically-relevant predictors (cognitive tests, grey matter atrophy) that are common across research and clinical cohorts, 2) test PPM predictions with independent multicenter real-world data from memory clinics across countries (UK, Singapore).


Our results provide evidence for a robust and explainable clinical AI-guided marker for early dementia prediction that is validated against longitudinal, multicenter patient data across countries, and has strong potential for adoption in clinical practice.

Here is a summary and some thoughts:

Cambridge scientists have developed an AI tool capable of predicting with high accuracy whether individuals with early signs of dementia will remain stable or develop Alzheimer’s disease. This tool utilizes non-invasive, low-cost patient data such as cognitive tests and MRI scans to make its predictions, showing greater sensitivity than current diagnostic methods. The algorithm was able to correctly identify 82% of individuals who would develop Alzheimer’s and 81% of those who wouldn’t, surpassing standard clinical markers. This advancement could reduce the reliance on invasive and costly diagnostic tests and allow for early interventions, potentially improving treatment outcomes.
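As a back-of-the-envelope illustration, the reported 82% and 81% figures correspond to the model's sensitivity (correctly flagged progressors) and specificity (correctly identified stable patients). A minimal sketch of that arithmetic, using hypothetical confusion-matrix counts rather than the study's actual data:

```python
# Sensitivity: fraction of true progressors the model correctly flags.
# Specificity: fraction of stable patients correctly identified as stable.
# The counts below are hypothetical, chosen only to mirror the reported figures.

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# Hypothetical cohort: 100 patients who progressed, 100 who stayed stable.
tp, fn = 82, 18   # progressors classified correctly / incorrectly
tn, fp = 81, 19   # stable patients classified correctly / incorrectly

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.82
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.81
```

Note that both numbers depend on the cutoff chosen for the model's risk score; the study's actual counts and thresholds are in the paper itself.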

The machine learning model stratifies patients into three groups: those whose symptoms remain stable, those who progress slowly to Alzheimer’s, and those who progress rapidly. This stratification could help clinicians tailor treatments and closely monitor high-risk individuals. Validated with real-world data from memory clinics in the UK and Singapore, the tool demonstrates its applicability in clinical settings. The researchers aim to extend this model to other forms of dementia and incorporate additional data types, with the ultimate goal of providing precise diagnostic and treatment pathways, thereby accelerating the discovery of new treatments for dementia.

Monday, July 15, 2024

Behavioral attraction predicts morbidly curious women's mating interest in men with dark personalities

Khosbayar, A., Brown, M., & Scrivner, C. (2024).
Personality and Individual Differences, 228, 112738.


Morbid curiosity indexes interest in learning about dangerous phenomena. Individuals with high levels of dark triad traits (narcissism, psychopathy, and Machiavellianism) can be dangerous, implicating them as relatively desirable to those reporting heightened morbid curiosity. Despite the potential costs of high-dark-triad men, it could benefit morbidly curious women to upregulate their preference for such men to satisfy short-term mating goals. This study tasked women with evaluating men exhibiting high and low levels of dark personality traits and completing a measure of trait morbid curiosity. Men described as exhibiting high levels of dark personality traits were more desirable as short-term mates than as long-term mates, although men described as reporting low levels of dark traits were more desirable overall. Morbidly curious women reported greater behavioral attraction toward dark-personality men but not greater affective attraction. Findings suggest a function to morbidly curious women's interest in dark personalities.

Here are some thoughts:

This research sheds light on an intriguing pattern: women with a high level of morbid curiosity often exhibit a strong attraction to men with dark personality traits, such as narcissism, Machiavellianism, and psychopathy. This study highlights that behavioral attraction—initial intrigue and engagement—is a key predictor of these women’s mating interest in such men. For clinical psychologists, this insight is crucial in addressing the relational dynamics of their female patients who may be drawn to potentially harmful partners.

We play a pivotal role in helping these patients recognize and understand their attraction patterns. Through psychotherapy, we can assist patients in challenging and reframing their perceptions and decision-making processes regarding romantic interests. Emphasizing self-awareness and self-esteem can significantly reduce the allure of unhealthy relationships, enabling patients to set healthier boundaries and seek partners with positive traits.

Moreover, it's vital for therapists to educate their patients about the characteristics and risks associated with dark personality traits. By screening for attraction patterns during assessments and offering psychoeducation, psychologists can empower women to make safer relationship choices. Enhancing coping mechanisms and relationship skills are also critical strategies, providing patients with the tools to build healthy relationships and recognize red flags. Ultimately, these efforts can support women in navigating their romantic lives more safely and healthily.

Sunday, July 14, 2024

Happiness and well-being: Is it all in your head? Evidence from the folk

Kneer, M., & Haybron, D. M. (2024).


Despite a voluminous literature on happiness and well-being, debates have been stunted by persistent dissensus on what exactly the subject matter is. Commentators frequently appeal to intuitions about the nature of happiness or well-being, raising the question of how representative those intuitions are. In a series of studies, we examined lay intuitions involving happiness- and well-being-related terms to assess their sensitivity to internal (psychological) versus external conditions. We found that all terms, including ‘happy’, ‘doing well’ and ‘good life’, were far more sensitive to internal than external conditions, suggesting that for laypersons, mental states are the most important part of happiness and well-being. But several terms, including ‘doing well’, ‘good life’ and ‘enviable life’ were substantially more sensitive to external conditions than others, such as ‘happy’, consistent with dominant philosophical views of well-being. Interestingly, the expression ‘happy’ was completely insensitive to external conditions for about two thirds of our participants, suggesting a purely psychological concept among most individuals. Overall, our findings suggest that lay thinking in this domain divides between two concepts, or families thereof: a purely psychological notion of being happy, and one or more concepts equivalent to, or encompassing, the philosophical concept of well-being. In addition, being happy is dominantly regarded as just one element of well-being. These findings have considerable import for philosophical debates, empirical research and public policy.


Here are some thoughts:

The authors argue that while cognitive and emotional processes are crucial, happiness is also significantly shaped by external elements such as social relationships, economic conditions, and physical health. This perspective challenges the notion that happiness is solely an internal state, emphasizing the importance of environmental influences.

Cultural narratives play a pivotal role in shaping perceptions of happiness, as different societies prioritize various aspects of well-being. Collectivist cultures may value social harmony and community well-being, whereas individualist cultures emphasize personal achievement and autonomy. Additionally, the article explores how folk psychology—the intuitive beliefs people have about their own and others' mental states—affects how happiness is understood and pursued, highlighting the widespread belief in the power of mindset and attitudes.

The article also touches on the philosophical dimensions of happiness, questioning whether it is a static state or a dynamic process involving purpose, fulfillment, and engagement with life's challenges. This inquiry suggests that happiness is more than just a collection of pleasurable experiences, but rather a complex phenomenon that integrates internal and external factors. Overall, the article calls for a holistic understanding of well-being that acknowledges the intricate interplay between mental states and socio-cultural contexts.

Saturday, July 13, 2024

Can AI Understand Human Personality? -- Comparing Human Experts and AI Systems at Predicting Personality Correlations

Schoenegger, P., et al. (2024, June 12).


We test the abilities of specialised deep neural networks like PersonalityMap as well as general LLMs like GPT-4o and Claude 3 Opus in understanding human personality. Specifically, we compare their ability to predict correlations between personality items to the abilities of lay people and academic experts. We find that when compared with individual humans, all AI models make better predictions than the vast majority of lay people and academic experts. However, when selecting the median prediction for each item, we find a different pattern: Experts and PersonalityMap outperform LLMs and lay people on most measures. Our results suggest that while frontier LLMs are better than most individual humans at predicting correlations between personality items, specialised models like PersonalityMap continue to match or exceed expert human performance even on some outcome measures where LLMs underperform. This provides evidence both in favour of the general capabilities of large language models and in favour of the continued place for specialised models trained and deployed for specific domains.

Here are some thoughts on the intersection of technology and psychology.

The research investigates how AI systems fare against human experts, including both laypeople and academic psychologists, in predicting correlations between personality traits.

The findings suggest that AI, particularly specialized deep learning models, may outperform individual humans in this specific task. This is intriguing, as it highlights the potential of AI to analyze vast amounts of data and identify patterns that might escape human intuition. However, it's important to remember that personality is a complex interplay of internal states, experiences, and environmental factors.

While AI may excel at recognizing statistical connections, it currently lacks the ability to grasp the underlying reasons behind these correlations.  A true understanding of personality necessitates the human capacity for empathy, cultural context, and consideration of individual narratives. In clinical settings, for instance, a skilled psychologist goes beyond identifying traits; they build rapport, explore the origin of these traits, and tailor interventions accordingly. AI, for now, remains a valuable tool for analysis, but it should be seen as complementary to, rather than a replacement for, human expertise in understanding the rich tapestry of human personality.

Friday, July 12, 2024

Why Scientific Fraud Is Suddenly Everywhere

Kevin T. Dugan
New York Magazine
Originally posted 21 May 24

Junk science has been forcing a reckoning among scientific and medical researchers for the past year, leading to thousands of retracted papers. Last year, Stanford president Marc Tessier-Lavigne resigned amid reporting that some of his most high-profile work on Alzheimer’s disease was at best inaccurate. (A probe commissioned by the university’s board of trustees later exonerated him of manipulating the data).

But the problems around credible science appear to be getting worse. Last week, scientific publisher Wiley decided to shutter 19 scientific journals after retracting 11,300 sham papers. There is a large-scale industry of so-called “paper mills” that sell fictive research, sometimes written by artificial intelligence, to researchers who then publish it in peer-reviewed journals — which are sometimes edited by people who had been placed by those sham groups. Among the institutions exposing such practices is Retraction Watch, a 14-year-old organization co-founded by journalists Ivan Oransky and Adam Marcus. I spoke with Oransky about why there has been a surge in fake research and whether fraud accusations against the presidents of Harvard and Stanford are actually good for academia.

I’ll start by saying that paper mills are not the problem; they are a symptom of the actual problem. Adam Marcus, my co-founder, had broken a really big and frightening story about a painkiller involving scientific fraud, which led to dozens of retractions. That’s what got us interested in that. There were all these retractions, far more than we thought but far fewer than there are now. Now, they’re hiding in plain sight.

Here are some thoughts:

Recent headlines might suggest a surge in scientific misconduct. However, it's more likely that increased awareness and stricter scrutiny are uncovering existing issues. From an ethical standpoint, the pressure to publish groundbreaking research can create a challenging environment. Publication pressure, coupled with the human tendency towards confirmation bias, can incentivize researchers to take unethical shortcuts that align data with their hypotheses. This can have a ripple effect, potentially undermining the entire scientific process.

Fortunately, the heightened focus on research integrity presents an opportunity for positive change. Initiatives promoting open science practices, such as data sharing and robust replication studies, can foster greater transparency. Furthermore, cultivating a culture that rewards ethical research conduct and whistleblowing, even in the absence of earth-shattering results, is crucial.  Science thrives on self-correction. By acknowledging these challenges and implementing solutions, the scientific community can safeguard the integrity of research and ensure continued progress.

Thursday, July 11, 2024

Harassment of scientists is surging — institutions aren’t sure how to help

Bianca Nogrady
Originally posted 21 May 24

As a vocal advocate of vaccinations for public health, Peter Hotez was no stranger to online harassment and threats. But then the abuse showed up on his doorstep.

It was a Sunday during a brutal Texas heatwave in June 2023 when a man turned up at Hotez’s home, filming himself as he shouted questions at the scientist, who is a paediatrician and virologist at Baylor College of Medicine in Houston, Texas.

Because of the long-running online and real-life abuse he has faced, Hotez now has the Texas Medical Center Police, Houston Police Department and Harris County Sheriff’s Office on speed dial, an agent tasked to him from the FBI and extra security whenever he speaks publicly.

“This is a very powerful adversarial force that is seeking to undermine science, and now it’s not only going after the science. It’s going after the scientists,” he says.

Hotez is an especially well-known scientist, but his experience is far from unique. Every day around the world, scientists are being abused and harassed online. They are being attacked on social media and by e-mail, telephone, letter and in person. And their reputations are being smeared with baseless accusations of misconduct. Sometimes, this escalates to real-world confrontations and attacks.

Here is my summary:

The article discusses a rise in harassment faced by scientists, particularly those doing research on hot-button topics like climate change. Universities and research institutions are struggling to develop effective ways to help these scientists.

Some scientists are targeted with online abuse and threats. Others fear repercussions within their field if they report harassment. This fear can silence important voices and discourage scientists from communicating their research.

The article highlights the debate about balancing safety with academic freedom. While some suggest limiting scientists' communication, others argue for better support systems and protection for researchers engaging in public outreach.

Wednesday, July 10, 2024

Honest Government Ad (aka PSA from John Connor)

Honest Government Ad - AI
July 2024

Note: Gallows humor that illuminates. Since I post a great deal about the ethics, morality, and risks of AI, this seems appropriate. Enjoy!

Tuesday, July 9, 2024

I like it because it hurts you: On the association of everyday sadism, sadistic pleasure, and victim blaming.

Sassenrath, C., et al. (2024).
Journal of Personality and Social Psychology,
126(1), 105–127.


Past research on determinants of victim blaming mainly concentrated on individuals’ just-world beliefs as motivational process underlying this harsh reaction to others’ suffering. The present work provides novel insights regarding underlying affective processes by showing how individuals prone to derive pleasure from others’ suffering—individuals high in everyday sadism—engage in victim blaming due to increased sadistic pleasure and reduced empathic concern they experience. Results of three cross-sectional studies and one ambulatory assessment study applying online experience sampling method (ESM; overall N = 2,653) document this association. Importantly, the relation emerged over and above the honesty–humility, emotionality, extraversion, agreeableness, conscientiousness, and openness personality model (Study 1a), and other so-called dark traits (Study 1b), across different cultural backgrounds (Study 1c), and also when sampling from a population of individuals frequently confronted with victim–perpetrator constellations: police officers (Study 1d). Studies 2 and 3 highlight a significant behavioral correlate of victim blaming. Everyday sadism is related to reduced willingness to engage in effortful cognitive activity as individuals high (vs. low) in everyday sadism recall less information regarding victim–perpetrator constellations of sexual assault. Results obtained in the ESM study (Study 4) indicate that the relation of everyday sadism, sadistic pleasure, and victim blaming holds in everyday life and is not significantly moderated by interpersonal closeness to the blamed victim or impactfulness of the incident. Overall, the present article extends our understanding of what determines innocent victims’ derogation and highlights emotional mechanisms, societal relevance, and generalizability of the observed associations beyond the laboratory.

The research discusses the phenomenon of victim blaming - the tendency to blame innocent victims for their misfortunes - and explores the role of everyday sadism as a potential determinant. The key points are:
  1. Victim blaming is a prevalent reaction when confronted with others' suffering, often explained by the belief in a just world where people get what they deserve. 
  2. However, recent research has challenged just-world explanations, suggesting emotional reactions play a role in victim blaming. 
  3. The text proposes that individuals high in everyday sadism - the tendency to derive pleasure from others' suffering - are more likely to engage in victim blaming due to experiencing sadistic pleasure and lacking empathic concern. 
  4. Everyday sadism is distinct from other "dark" personality traits like psychopathy and is uniquely associated with dehumanization, moral disengagement, and aggressive behavior. 
  5. The research aims to establish the link between everyday sadism and victim blaming across various contexts, including non-Western samples, and explore its association with reduced willingness to help victims. 
  6. Multiple cross-sectional and experience sampling studies are reported to investigate these hypotheses while controlling for just-world beliefs and other relevant factors. 

Monday, July 8, 2024

Fake beauty queens charm judges at the Miss AI pageant

Chloe Veltman
Originally posted 9 June 24

Here is an excerpt:

But in the real world, beauty pageants are fading. They are no longer the giant cultural draw they once were, attracting tens of millions of TV viewers during their peak in the 1970s and '80s.

The events are controversial, because there’s a long history of them feeding into harmful stereotypes of women. 

Indeed, all 10 Miss AI finalists fit in with traditional beauty queen tropes: They all look young, buxom and thin.

The controversial nature of pageants, coupled with the application of cutting-edge AI technology, is proving to be catnip for the media and the public. Simply put, sexy images of fake women are an easy way to connect with fans.

"With this technology, we're very much in the early stages, where I think this is the perfect type of content that's highly engaging and super low hanging fruit to go after, said Eric Dahan, CEO of the social media marketing company Mighty Joy.

In an interview with NPR, beauty pageant historian and Miss AI judge Sally-Ann Fawcett said she hopes to be able to change these stereotypes "from the inside" by focusing her judging efforts on the messaging around these AI beauty queens — and not just on their looks.


Here are some thoughts:

While the use of AI to create realistic human models is technologically impressive, its application in a beauty pageant context is concerning. It reinforces the idea that a woman's worth is primarily based on her physical appearance, which can have negative psychological impacts, especially on young girls and women. The technology could be better utilized to promote more positive and inclusive representations of beauty and human diversity.

I would urge the organizers and participants of the Miss AI pageant to critically reflect on the potential harm their actions may cause. They should strive to use this powerful technology in a more responsible and socially conscious manner, challenging rather than reinforcing harmful stereotypes and objectification. Promoting diverse and inclusive representations of beauty would be a more ethical and psychologically healthy approach.

Sunday, July 7, 2024

Impulsivity in fatal suicide behaviour: A systematic review and meta-analysis of psychological autopsy studies

Sanz-Gómez, S., et al. (2024).
Psychiatry Research, 337, 115952.
Advance online publication.


Our aim is to review and perform a meta-analysis on the role of impulsivity in fatal suicide behaviour. We included papers that used psychological autopsy methodology, assessed adult deaths by suicide, and included an assessment of impulsivity. We excluded papers about assisted suicide, terrorist suicide, or causes of death other than suicide, as well as papers in which the postmortem diagnosis was made only from medical records or databases. 97 articles were identified. 33 were included in the systematic review and nine in the meta-analysis. We found that people who die by suicide with high impulsivity are associated with younger age, substance abuse, and low intention to die, whereas those with low impulsivity were associated with older age, depression, schizophrenia, high intention to die, and low social support. In the meta-analysis, suicide cases had higher impulsivity scores than living controls (Hedges' g = 0.59, 95% CI [0.28, 0.89], p = .002). However, studies showed heterogeneity (Q = 90.86, p < .001, I² = 89.0%). The impulsivity-aggressiveness interaction was assessed through meta-regression (β = 0.447, p = .045). Individuals with high impulsivity would be exposed to a higher risk of fatal suicide behaviour; aggressiveness would play a mediating role. People who die by suicide with high and low impulsivity display distinct characteristics, which may reflect different endophenotypes leading to suicide by different pathways.

Here is the conclusion.

This systematic review has shed light on the role of impulsivity in fatal suicide behaviour. This topic has received less attention than impulsivity in other behaviours of the suicidal spectrum, mostly due to the methodological barriers it entails. We found that impulsivity as a trait plays a role in deaths by suicide. Individuals with high impulsivity traits who die by suicide exhibit distinct characteristics such as younger age, substance abuse, and low intent to die, whereas non-impulsive people who die by suicide tend to be older, experience depression or schizophrenia, and have high intent to die. Social support is not a protective factor for death by suicide in people with high impulsivity, which poses challenges for suicide prevention in this population.
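For readers unfamiliar with the effect-size metric quoted in the abstract, Hedges' g is a standardized mean difference (like Cohen's d) with a small-sample bias correction. A minimal sketch of the computation; the group means, standard deviations, and sample sizes below are illustrative stand-ins, not the pooled data from this meta-analysis:

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with Hedges' small-sample
    correction factor, J ~= 1 - 3 / (4*(n1 + n2) - 9)."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd          # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

# Illustrative numbers: suicide cases vs. living controls on an
# impulsivity scale (not the actual study data).
g = hedges_g(m1=24.0, s1=6.0, n1=120, m2=20.5, s2=6.0, n2=150)
print(round(g, 2))  # 0.58
```

A g of 0.59, as reported, is conventionally a medium effect; the wide confidence interval and high I² indicate substantial variation across the nine pooled studies.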

Saturday, July 6, 2024

We built this culture (so we can change it): Seven principles for intentional culture change.

Hamedani, M. G., et al. (2024).
American Psychologist, 79(3), 384–402.


Calls for culture change abound. Headlines regularly feature calls to change the “broken” or “toxic” cultures of institutions and organizations, and people debate which norms and practices across society are now defunct. As people blame current societal problems on culture, the proposed fix is “culture change.” But what is culture change? How does it work? Can it be effective? This article presents a novel social psychological framework for intentional culture change—actively and deliberately modifying the mutually reinforcing features of a culture. Synthesizing insights from research and application, it proposes an integrated, evidence-based perspective centered around seven core principles for intentional culture change: Principle 1: People are culturally shaped shapers, so they can be culture changers; Principle 2: Identifying, mapping, and evaluating the key levels of culture helps locate where to target change; Principle 3: Culture change happens in both top-down and bottom-up ways and is more effective when the levels are in alignment; Principle 4: Culture change can be easier when it leverages existing core values and harder when it challenges deep-seated defaults and biases; Principle 5: Culture change typically involves power struggles and identity threats; Principle 6: Cultures interact with one another and change can cause backlash, resistance, and clashes; and Principle 7: Timing and readiness matter. While these principles may be broadly used, here they are applied to the issue of social inequality in the United States. Even though culture change feels particularly daunting in this problem area, it can also be empowering—especially when people leverage evidence-based insights and tools to reimagine and rebuild their cultures.

Public Significance Statement

Calls for culture change abound. Headlines regularly feature calls to change the “broken” or “toxic” cultures of the police, the workplace, U.S. politics, and more, and norms and practices across society are hotly debated. The proposed fix is “culture change.” But what is culture change? How does it work? And can it be effective? This article presents an emerging social psychological framework for intentional culture change, with a focus on behavioral change and addressing societal disparities in the United States.


Here are some thoughts:

People as Changemakers:  The concept of "culturally shaped shapers" empowers everyone in the organization rather than framing change as a purely top-down exercise of leadership. It emphasizes that everyone, from executives to frontline employees, has the power to influence the culture, which fosters a sense of ownership.

Multi-Level Mapping:  The idea of mapping the different cultural levels (individual, team, organizational, societal) is insightful. By understanding these interconnected layers, leaders can pinpoint areas for focused intervention and ensure their efforts have a cascading effect.

The Power of Alignment:  The emphasis on both top-down and bottom-up approaches is crucial. When leadership aspirations align with employee experiences,  genuine cultural change flourishes. Leaders who actively listen and incorporate employee voices create a sense of trust and shared purpose.

Leveraging Values:  Building upon existing core values is a smart strategy.  Change feels less disruptive when it complements established principles. However, confronting deep-seated biases is also important. Leaders need the courage to address outdated norms that may be hindering progress.

The Inevitability of Conflict:  The acknowledgment of power struggles and identity threats during cultural change is an important reminder.  Leaders should prepare for resistance and be open to navigating these challenges constructively. Transparency and open communication are key.

The Ripple Effect:  The point that cultures interact with one another is valuable. Leaders must consider how their organization's culture interacts with external forces, like industry norms or societal shifts. This awareness can help them anticipate potential challenges and opportunities.

Timing is Everything:  The importance of timing and readiness resonates deeply.  Leaders need to assess their organization's receptiveness to change and tailor their approach accordingly.  Forcing change before the groundwork is laid can backfire.

Friday, July 5, 2024

Future You: A Conversation with an AI-Generated Future Self Reduces Anxiety, Negative Emotions, and Increases Future Self-Continuity

Pataranutaporn, P., et al. (2024, May 21).


We introduce "Future You," an interactive, brief, single-session, digital chat intervention designed to improve future self-continuity--the degree of connection an individual feels with a temporally distant future self--a characteristic that is positively related to mental health and wellbeing. Our system allows users to chat with a relatable yet AI-powered virtual version of their future selves that is tuned to their future goals and personal qualities. To make the conversation realistic, the system generates a "synthetic memory"--a unique backstory for each user--that creates a through line between the user's present age (between 18 and 30) and their life at age 60. The "Future You" character also adopts the persona of an age-progressed image of the user's present self. After a brief interaction with the "Future You" character, users reported decreased anxiety and increased future self-continuity. This is the first study successfully demonstrating the use of personalized AI-generated characters to improve users' future self-continuity and wellbeing.

Limitations and Ethical Considerations

Our work opens new possibilities for AI-powered, interactive future self interventions, but there are limitations to address. Future research should: directly compare our Future You intervention with other validated interventions; examine the longitudinal effects of using the Future You platform; leverage more sophisticated ML models to potentially increase realism; and consider how interacting with a future self might reconstruct personal decisions as interpersonal ones between present and future selves as a psychological mechanism that explains treatment effects. Potential misuses of AI-generated future selves to be mindful of include: inaccurately depicting the future in a way that harmfully influences present behavior; endorsing negative behaviors; and hyper-personalization that reduces real human relationships and adversely impacts health. These challenges are part of a broader conversation on the ethics of human-AI interaction and AI-generated media happening at both personal and policy levels. Researchers must further investigate and ensure the ethical use of this technology.

Here are some thoughts:

Promise and Potential:

The concept of feeling connected to your future self (future self-continuity) is crucial for mental well-being. "Future You" could be a powerful tool to bridge that gap, leading to better decision-making and reduced anxiety about the unknown. Tailoring the AI-generated future self to the user's goals and personality is key. This personal touch can foster a sense of believability and connection.

Points to Consider:

* AI biases could unintentionally influence the future self's persona. We need to ensure the system promotes a healthy and realistic vision of the future, not a distorted one.

* Not everyone has access to such technology. It's important to consider how to make this tool widely available and address potential biases based on socioeconomic factors.

* While "Future You" can be a valuable tool, it shouldn't replace critical thinking and personal agency. People should still be empowered to make their own choices.

Thursday, July 4, 2024

Pentagon data reveals US soldiers more likely to die by suicide than in combat

Tom Vanden Brook
USA Today
Originally posted 12 June 24

U.S. soldiers were almost nine times more likely to die by suicide than by enemy fire, according to a Pentagon study for the five-year period ending in 2019.

The study, published in May by the Defense Health Agency, found that suicide was the leading cause of death among active-duty soldiers from 2014 to 2019. There were 883 suicide deaths during that time period. Accidents were the No. 2 cause with 814 deaths. There were 96 combat deaths.

The suicide figures from 2019 predate some Army and Pentagon initiatives to combat suicide, including a workforce that addresses harmful behaviors like alcohol abuse that can contribute to deaths by suicide. In addition, combat deaths declined from 31 in 2014 to 16 in 2019 as deployments to war zones in the Middle East and Afghanistan decreased.

Suicide, meanwhile, has increased among active-duty soldiers, according to figures obtained by USA TODAY. So far in 2024, 55 soldiers have died by suicide.

Army officials, in an interview with USA TODAY, pointed to increasing rates of suicide in U.S. society as a whole that are reflected in their ranks. They also talked about new tactics they're using to reduce suicide.

Here are some comments:

A recent Pentagon study revealed a shocking truth: active-duty US soldiers are far more likely to die by suicide than in combat. This data exposes a hidden mental health crisis within the military community.

The stresses of combat and the challenges of reintegration into civilian life can have a devastating impact. To address this, we need a cultural shift. Seeking help for mental health struggles must be seen as a sign of strength, not weakness.

The solution requires a multi-pronged approach. We need to prioritize readily available mental health services, address substance abuse issues, and strengthen social support networks within the military.  Most importantly, we need to ensure soldiers are equipped to handle the psychological challenges they face, both during and after service.

Let's not forget - suicide is preventable. By raising awareness, reducing stigma, and providing effective resources, we can support our soldiers and ensure they get the help they deserve.

Wednesday, July 3, 2024

Fake therapist fooled hundreds online until she died, state records say

Brett Kelman
CBS Health Watch
Originally posted 2 July 24

Hundreds of Americans may have unknowingly received therapy from an untrained impostor who masqueraded as an online therapist, possibly for as long as two years, and the deception crumbled only when she died, according to state health department records.

Peggy A. Randolph, a social worker who was licensed in Florida and Tennessee and formerly worked for Brightside Health, a nationwide online therapy company, is accused of helping her wife impersonate her in online sessions, according to an investigation report from the Florida Department of Health.

The Florida report says the couple "defrauded" patients through a "coordinated effort": As Randolph treated patients in person, her wife pretended to be her in telehealth sessions with Brightside patients. The deceit was discovered after the wife died last year and a patient realized they'd been talking to the wrong person, according to a Tennessee Department of Health settlement agreement.

Records from both states identify Randolph's wife only by her initials, T.R., but her full name is in her obituary: Tammy G. Heath-Randolph. Therapists are generally expected to have at least a master's degree, but Randolph's wife was "not licensed or trained to provide any sort of counseling services," according to the Tennessee agreement.

Here are some thoughts:

This case of an impostor therapist masquerading as a licensed professional in online therapy sessions raises numerous ethical, healthcare, and psychotherapy concerns. The most obvious issues include the severe breach of trust between therapist and patient, the potential harm caused to vulnerable individuals seeking mental health support, and the serious violations of patient privacy.  The incident also highlights the critical importance of proper licensing and credentialing in healthcare, especially in telehealth settings.

This case also reveals less apparent but equally significant problems. It exposes potential vulnerabilities in telehealth systems, particularly in verifying the identity of online therapists, suggesting a need for more robust authentication methods.

The alleged involvement of the therapist's wife introduces complex ethical dilemmas regarding personal relationships in professional healthcare contexts. Furthermore, the fact that this deception went unnoticed for an extended period might indicate systemic issues such as therapist burnout or inadequate oversight in the mental health field. The case also demonstrates the challenges in regulating and monitoring telehealth services that operate across multiple states.

Interestingly, this real-life impostor scenario could potentially exacerbate feelings of imposter syndrome among both genuine therapists and patients. The posthumous discovery of the deception presents unique challenges in addressing the harm caused and seeking appropriate resolutions.

Lastly, the financial aspect of this case, where compensation was received for fraudulent sessions, raises important questions about the potential for monetary incentives to compromise ethical standards in healthcare. This incident underscores the urgent need for stronger safeguards in telehealth, improved oversight mechanisms, and a renewed focus on maintaining the integrity of the therapist-patient relationship in the evolving landscape of digital healthcare.

Tuesday, July 2, 2024

Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents

Kneer, Markus (2021).
Cognitive Science, 45(10), e13032


The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.


In a preregistered experiment, I explored the folk concept of lying for both human agents and robots. Consistent with previous findings for human agents, the majority of participants think that it is possible to lie with a true claim, and hence in cases where there is no actual deception. What seems to matter more for lying are intentions to deceive. Contrary to what might have been expected, intentions of this sort are equally ascribed to robots as to humans. It thus comes as no surprise that robots are judged as lying, and blameworthy for it, to similar degrees as human agents. Future work in this area should attempt to replicate these findings manipulating context and methodology. Ethicists and legal scholars should explore whether, and to what degree, it might be morally appropriate and legally necessary to restrict the use of deceptive artificial agents.

Here is a summary:

This research dives into whether people perceive robots as capable of lying. The study investigates the concept of lying and its application to artificial intelligence (AI) through experiments. Kneer explores if humans ascribe deceitful intent to robots and judge their deceptions as harshly as human lies. The findings suggest that people are likely to consider robots capable of lying and hold them accountable for deception. The study argues that this necessitates further exploration of the ethical implications of robot deception in our interactions with AI.

Monday, July 1, 2024

Effects of intellectual humility in the context of affective polarization: Approaching and avoiding others in controversial political discussions

Knöchelmann, L., & Cohrs, J. C. (2024).
Journal of personality and social psychology, 
Advance online publication.


Affective polarization, the extent to which political actors treat each other as disliked outgroups, is challenging political exchange and deliberation, for example, via mistrust of the "political enemy" and unwillingness to discuss political topics with them. The present experiments address this problem and study what makes people approach, and not avoid, potential discussion partners in the context of polarized political topics in Germany. We hypothesized that intellectual humility, the recognition of one's intellectual limitations, would predict both less affective polarization and higher approach and lower avoidance tendencies toward contrary-minded others. Across four preregistered online-survey experiments (N = 1,668), we manipulated how intellectually humble a target person was perceived and measured participants' self-reported (topic-specific) intellectual humility. Results revealed that participants' intellectual humility was consistently negatively correlated with affective polarization. Additionally, intellectual humility of both the target person and the participants was beneficial, and sometimes even necessary, to make participants approach, and not avoid, the target person. Intellectual humility was more important than moral conviction, opinion, and opinion strength. Furthermore, the effects on approach and avoidance were mediated by more positive expectations regarding the debate, and the effects on future willingness for contact by higher target liking. Our findings suggest that intellectual humility is an important characteristic to enable political exchange as it leads to seeing political outgroups more positively and to a higher willingness to engage in intergroup contact. Implications for intergroup contact of political groups as well as ideas for future research are discussed.

Here are some thoughts as to why intellectual humility is important to practicing psychologists:

Intellectual humility involves recognizing the limits of one's knowledge and being open to opposing perspectives.  This can help therapists avoid being overly dogmatic or dismissive of their clients' beliefs, even if they disagree. Maintaining an open and non-judgmental stance is crucial for building a strong therapeutic alliance.

People higher in intellectual humility tend to be more empathetic, forgiving, and valuing of others' wellbeing.  These qualities can facilitate better rapport and understanding between therapists and clients from different backgrounds or with contrasting worldviews.

Intellectual humility is associated with reduced polarization, extremism, and susceptibility to conspiracy beliefs.  Therapists exhibiting intellectual humility can model these qualities for clients struggling with rigid ideological stances that strain their relationships.

When political or controversial topics arise in therapy, intellectual humility can allow therapists to thoughtfully consider different perspectives without getting mired in unproductive debates or power struggles with clients.  An intellectually humble stance creates space for productive dialogue.

Overall, cultivating intellectual humility may help therapists navigate affective polarization and controversial topics more constructively in the therapeutic context by increasing openness, empathy, and willingness to entertain alternative viewpoints.  This can strengthen the therapeutic relationship and facilitate progress, even when working with clients holding contrasting beliefs.

Sunday, June 30, 2024

Reddit Provides Insight into How People Think About Moral Dilemmas

Sigal Samuel
Vox: Future Perfect
Undated post

Here is a sample:

Uncovering philosophy’s blind spots 

Let’s get a bit more precise: It’s not as though all of philosophy has ignored relational context. But one branch — utilitarianism — is strongly inclined in this direction. Utilitarians believe we should seek the greatest happiness for the greatest number of people — and we have to consider everybody’s happiness equally. So we’re not supposed to be partial to our own friends or family members. 

This ethical approach took off in the 18th century. Today, it’s extremely influential in Western philosophy — and not just in the halls of academia. Famous philosophers like Peter Singer have popularized it in the public sphere, too. 

Increasingly, though, some are challenging it. 

“Moral philosophy has for so long been about trying to identify universal moral principles that apply to all people regardless of their identity,” Yudkin told me. “And it’s because of this effort that moral philosophers have really moved away from the relational perspective. But the more that I think about the data, the more clear to me it is that you’re losing something essential from the moral equation when you abstract away from relationships.” 

Moral psychologists like Princeton’s Molly Crockett and Yale’s Margaret Clark have likewise been investigating the idea that moral obligations are relationship-specific.

“Here’s a classic example,” Crockett told me a few years ago. “Consider a woman, Wendy, who could easily provide a meal to a young child but fails to do so. Has Wendy done anything wrong? It depends on who the child is. If she’s failing to provide a meal to her own child, then absolutely she’s done something wrong! But if Wendy is a restaurant owner and the child is not otherwise starving, then they don’t have a relationship that creates special obligations prompting her to feed the child.”

According to Crockett, being a moral agent has become trickier for us with the rise of globalization, which forces us to think about how our actions might affect people we’re never going to meet. “Being a good global citizen now butts up against our very powerful psychological tendencies to prioritize our families and friends,” Crockett told me.

Here is my summary:

Reddit Provides Insight into How People Think About Moral Dilemmas
  • Philosophers Daniel Yudkin and colleagues analyzed millions of comments from Reddit's "Am I the Asshole?" forum to study how ordinary people reason about moral dilemmas in real life situations.
  • They found the most common dilemmas involved "relational obligations" - what we owe to others based on our relationships with them, like family, friends, coworkers etc.
  • The types of moral dilemmas people faced varied based on the specific relationship context (e.g. with a sibling vs. manager).
Challenging the Impartiality of Utilitarianism
  • This challenges the utilitarian view in philosophy that we should impartially maximize happiness for everyone equally, ignoring special relationships.
  • Some argue this impartial view overlooks the deep psychological importance of prioritizing close relations like family over strangers.
  • While impartiality may be an ideal, critics say it is psychologically unrealistic to expect people to abandon loved ones to help larger numbers of strangers.
  • The research highlights how modern moral philosophy, especially utilitarianism, may fail to account for the central role relationships and social contexts play in ordinary moral reasoning and obligations.
As others have said better than me, moral norms and principles provide a shared framework for evaluating right and wrong behavior. They define obligations and duties we have towards others, especially those close to us. By adhering to moral codes, individuals can build trust, reciprocity, and a sense of fairness in their relationships.

The expression of moral judgments, both positive and negative, helps regulate self-interest and enforce cooperative norms within groups. When people can call out immoral actions and praise ethical conduct, it incentivizes prosocial behavior and discourages free-riding. This promotes cooperation for mutual benefit.

Saturday, June 29, 2024

OpenAI insiders are demanding a “right to warn” the public

Sigal Samuel
Originally posted 5 June 24

Here is an excerpt:

To be clear, the signatories are not saying they should be free to divulge intellectual property or trade secrets, but as long as they protect those, they want to be able to raise concerns about risks. To ensure whistleblowers are protected, they want the companies to set up an anonymous process by which employees can report their concerns “to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise.” 

An OpenAI spokesperson told Vox that current and former employees already have forums to raise their thoughts through leadership office hours, Q&A sessions with the board, and an anonymous integrity hotline.

“Ordinary whistleblower protections [that exist under the law] are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the signatories write in the proposal. They have retained a pro bono lawyer, Lawrence Lessig, who previously advised Facebook whistleblower Frances Haugen and whom the New Yorker once described as “the most important thinker on intellectual property in the Internet era.”

Here are some thoughts:

AI development is booming, but with great power comes great responsibility, typed the Spiderman fan.  AI researchers at OpenAI are calling for a "right to warn" the public about potential risks. In clinical psychology, we have a "duty to warn" for violent patients. This raises important ethical questions. On one hand, transparency and open communication are crucial for responsible AI development.  On the other hand, companies need to protect their ideas.  The key seems to lie in striking a balance.  Researchers should have safe spaces to voice concerns without fearing punishment, and clear guidelines can help ensure responsible disclosure without compromising confidential information.

Ultimately, fostering a culture of open communication is essential to ensure AI benefits society without creating unforeseen risks.  AI developers need similar ethical guidelines to psychologists in this matter.

Friday, June 28, 2024

Becoming a culturally responsive and socially just clinical supervisor

Spowart, J. K. P., & Robertson, S. E. (2024).
Canadian Psychology / Psychologie canadienne.
Advance online publication. 


Clinical supervisors must learn to attend to and address a breadth of cultural, diversity and social justice factors and dynamics when providing supervision. Developing these abilities does not occur automatically; rather, training in clinical supervision has a significant impact on supervisors’ development. Unfortunately, there is relatively limited research on how supervisors develop these same ways of being and working. Therefore, the purpose of this study was to explore how counselling psychology doctoral students understand their experiences of becoming culturally responsive and socially just clinical supervisors. Findings from this study detail the developmental experiences of novice supervisors and highlight training needs, educational interventions, progression of competencies and experiences with counselling supervisees and supervisors-of-supervision. Implications for theories of supervisor development and approaches in graduate training programmes are discussed alongside calls to more robustly integrate culturally responsive and socially just training and approaches throughout the field of clinical supervision.

Impact Statement

Clinical supervisors are responsible for attending to and addressing issues of culture, diversity and advocacy so that they may better prepare new mental health practitioners to support populations from diverse backgrounds. Little is known about the training experiences and needs of clinical supervisors as they learn to carry out this important work. The present study addresses this gap in the literature by highlighting the experiences of supervisors-in-training and provides tangible education and training recommendations to help ensure more culturally responsive and socially just clinical supervision practices.

Here are two excerpts:

From the Introduction:

Clinical supervision is a distinct area of practice in psychology (Arthur & Collins, 2015). Historically, it was assumed that becoming a clinical supervisor was "a natural outgrowth of the acquisition of [counselling] experience" (Thériault & Gazzola, 2019, p. 155). Currently, it is recognised that becoming a clinical supervisor is a unique, complex and multifaceted developmental process in which distinct skills, knowledge, awareness and attitudes must be cultivated (Falender & Shafranske, 2017; Thériault & Gazzola, 2019). Adding to this, providing supervision alone does not guarantee supervisor development or the acquisition of clinical supervision competencies (Falender & Shafranske, 2004; C. E. Watkins, 2012). Rather, training in clinical supervision has been shown to have a significant impact on development as a supervisor (Christofferson et al., 2023; Gazzola & De Stefano, 2016; Milne et al., 2011). Individuals may obtain such training either during graduate school or through postgraduate professional development.

From the Discussion:

To begin, the importance of MCSJ (Multicultural Social Justice) factors and dynamics served as a context for the doctoral student SITs' (Supervisors In Training) experiences. As if it were a lens through which they understood their practice and development, their focus on MCSJ factors and dynamics was not something that could be divorced from their experiences. As they were transitioning into and taking on their new role, the SITs experienced some initial difficulties. At first, they felt they needed a road map. They did not have a clear understanding of how they could provide CRSJ supervision and wished they had received more initial guidance. Some of these initial difficulties abated as the doctoral student SITs were impacted by a number of supports to their development.

Thursday, June 27, 2024

When Therapists Lose Their Licenses, Some Turn to the Unregulated Life Coaching Industry Instead

Jessica Miller
Salt Lake Tribune
Originally published 17 June 24

A frustrated woman recently called the Utah official in charge of professional licensing, upset that his office couldn’t take action against a life coach she had seen. Mark Steinagel recalls the woman telling him: “I really think that we should be regulating life coaching. Because this person did a lot of damage to me.”

Reports about life coaches — who sell the promise of helping people achieve their personal or professional goals — come into Utah’s Division of Professional Licensing about once a month. But much of the time, Steinagel or his staff have to explain that there’s nothing they can do.

If the woman had been complaining about any of the therapist professions overseen by DOPL, Steinagel’s office might have been able to investigate and potentially order discipline, including fines.

But life coaches aren’t therapists and are mostly unregulated across the United States. They aren’t required to be trained in ethical boundaries the way therapists are, and there’s no universally accepted certification for those who work in the industry.

Here are some thoughts on the ethics of this trend:

The trend of therapists who have lost their licenses transitioning to the unregulated life coaching industry raises significant ethical concerns and risks. This shift allows individuals who have been deemed unfit to practice therapy to continue working with vulnerable clients without oversight or accountability. The lack of regulation in life coaching means that these practitioners can potentially continue harmful behaviors, misrepresent their qualifications, and exploit clients without facing the same consequences they would in the regulated therapy field.

This situation poses substantial risks to clients (and to the integrity of coaching as a profession). Clients seeking help may not understand the difference between regulated therapy and unregulated life coaching, potentially exposing themselves to practitioners who have previously violated ethical standards. The presence of discredited therapists in the life coaching industry can erode public trust in mental health services and coaching alike, potentially deterring individuals from seeking necessary help. Moreover, clients have limited legal recourse if they are harmed by an unregulated life coach, leaving them vulnerable to financial and emotional distress.

To address these concerns, there is a pressing need for regulatory measures in the life coaching industry, particularly concerning practitioners with a history of ethical violations in related fields. Such regulations could help maintain the integrity of coaching, protect vulnerable clients, and ensure that those seeking help receive services from qualified and ethical practitioners. Without such measures, the potential for harm remains significant, undermining the valuable work done by ethical professionals in both therapy and life coaching.

Wednesday, June 26, 2024

Can Generative AI improve social science?

Bail, C. A. (2024).
Proceedings of the National Academy of
Sciences of the United States of America, 121(21). 


Generative AI that can produce realistic text, images, and other human-like outputs is currently transforming many different industries. Yet it is not yet known how such tools might influence social science research. I argue Generative AI has the potential to improve survey research, online experiments, automated content analyses, agent-based models, and other techniques commonly used to study human behavior. In the second section of this article, I discuss the many limitations of Generative AI. I examine how bias in the data used to train these tools can negatively impact social science research—as well as a range of other challenges related to ethics, replication, environmental impact, and the proliferation of low-quality research. I conclude by arguing that social scientists can address many of these limitations by creating open-source infrastructure for research on human behavior. Such infrastructure is not only necessary to ensure broad access to high-quality research tools, I argue, but also because the progress of AI will require deeper understanding of the social forces that guide human behavior.

Here is a brief summary:

Generative AI, with its ability to produce realistic text, images, and data, has the potential to significantly impact social science research. This article explores both the exciting possibilities and potential pitfalls of this new technology.

On the positive side, generative AI could streamline data collection and analysis, making social science research more efficient and allowing researchers to explore new avenues. For example, AI-powered surveys could be more engaging and lead to higher response rates. Additionally, AI could automate tasks like content analysis, freeing up researchers to focus on interpretation and theory building.

However, there are also ethical considerations. AI models can inherit and amplify biases present in the data they're trained on. This could lead to skewed research findings that perpetuate social inequalities. Furthermore, the opaqueness of some AI models can make it difficult to understand how they arrive at their conclusions, raising concerns about transparency and replicability in research.

Overall, generative AI offers a powerful tool for social scientists, but it's crucial to be mindful of the ethical implications and limitations of this technology. Careful development and application are essential to ensure that AI enhances, rather than hinders, our understanding of human behavior.

Tuesday, June 25, 2024

‘I’m dying, you’re not’: Those terminally ill ask more states to legalize physician-assisted death

Jesse Bedayn
Updated 6:39 PM EDT, April 12, 2024

On a brisk day at a restaurant outside Chicago, Deb Robertson sat with her teenage grandson to talk about her death.

She’ll probably miss his high school graduation. She declined the extended warranty on her car. Sometimes she wonders who will be at her funeral.

Those things don’t frighten her much. The 65-year-old didn’t cry when she learned two months ago that the cancerous tumors in her liver were spreading, portending a tormented death.

But later, she received a call. A bill moving through the Illinois Legislature to allow certain terminally ill patients to end their own lives with a doctor’s help had made progress.

Then she cried.

“Medical-aid in dying is not me choosing to die,” she says she told her 17-year-old grandson. “I am going to die. But it is my way of having a little bit more control over what it looks like in the end.”

Here is a summary:

The article discusses the ethical and moral debate surrounding physician-assisted death (PAD), also known as medical aid in dying (MAiD). It highlights the desire of terminally ill patients for more control over their end-of-life experience, including the option for a peaceful death facilitated by a doctor.

On one hand, the article presents the perspective of patients like Deb Robertson, who argues that MAiD isn't about choosing to die, but about choosing how to die with dignity on their own terms, avoiding prolonged suffering.

On the other hand, the patchwork of laws across different states raises ethical concerns. Some states are considering legalizing MAiD, while others are proposing stricter bans. This creates a situation where some patients have to travel to distant states or forgo their wishes entirely.

The article doesn't take a definitive stance on the morality of MAiD, but rather presents the arguments on both sides, leaving the reader to consider the complex ethical questions surrounding end-of-life decisions.