Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, April 24, 2024

Taking a Break From Social Media

Good morning-

All is well with me.  I hope you are enjoying Ethics & Psychology.

Given that I have been providing these articles and services for the past 13 years, I am finally taking a break. I plan to be off social media for the next month.

I will start posting new articles and news stories sometime in May.

Best wishes!!

What Deathbed Visions Teach Us About Living

Phoebe Zerwick
The New York Times
Originally posted March 12, 2024

Here is an excerpt:

At the time, only a handful of published medical studies had documented deathbed visions, and they largely relied on secondhand reports from doctors and other caregivers rather than accounts from patients themselves. On a flight home from a conference, Kerr outlined a study of his own, and in 2010, a research fellow, Anne Banas, signed on to conduct it with him. Like Kerr, Banas had a family member who, before his death, experienced visions — a grandfather who imagined himself in a train station with his brothers.

The study wasn’t designed to answer how these visions differ neurologically from hallucinations or delusions. Rather, Kerr saw his role as chronicler of his patients’ experiences. Borrowing from social-science research methods, Kerr, Banas and their colleagues based their study on daily interviews with patients in the 22-bed inpatient unit at the Hospice campus in the hope of capturing the frequency and varied subject matter of their visions. Patients were screened to ensure that they were lucid and not in a confused or delirious state. The research, published in 2014 in The Journal of Palliative Medicine, found that visions are far more common and frequent than other researchers had found, with an astonishing 88 percent of patients reporting at least one vision. (Later studies in Japan, India, Sweden and Australia confirm that visions are common. The percentages range from about 20 to 80 percent, though a majority of these studies rely on interviews with caregivers and not patients.)

In the last 10 years, Kerr has hired a permanent research team who expanded the studies to include interviews with patients receiving hospice care at home and with their families, deepening the researchers’ understanding of the variety and profundity of these visions. They can occur while patients are asleep or fully conscious. Dead family members figure most prominently, and by contrast, visions involving religious themes are exceedingly rare. Patients often relive seminal moments from their lives, including joyful experiences of falling in love and painful ones of rejection. Some dream of the unresolved tasks of daily life, like paying bills or raising children. Visions also entail past or imagined journeys — whether long car trips or short walks to school. Regardless of the subject matter, the visions, patients say, feel real and entirely unique compared with anything else they’ve ever experienced. They can begin days, even weeks, before death. Most significant, as people near the end of their lives, the frequency of visions increases, further centering on deceased people or pets. It is these final visions that provide patients, and their loved ones, with profound meaning and solace.


Here is a summary:

The article explores the phenomenon of deathbed visions experienced by dying individuals. These visions most often involve deceased loved ones and the reliving of meaningful moments from patients' lives, and they instill a sense of peace, meaning, and solace as death approaches. Patients describe the experiences as feeling entirely real and as distinct from hallucinations or delusions, and the research draws on patients' own accounts rather than secondhand reports from caregivers. The article emphasizes how these visions can transform perceptions of death, inspiring awe and encouraging a focus on love and spiritual well-being in daily life.

Tuesday, April 23, 2024

Machines and Morality

Seth Lazar
The New York Times
Originally posted 19 June 23

Here is an excerpt:

I’ve based my philosophical work on the belief, inspired by Immanuel Kant, that humans have a special moral status — that we command respect regardless of whatever value we contribute to the world. Drawing on the work of the 20th-century political philosopher John Rawls, I’ve assumed that human moral status derives from our rational autonomy. This autonomy has two parts: first, our ability to decide on goals and commit to them; second, our possession of a sense of justice and the ability to resist norms imposed by others if they seem unjust.

Existing chatbots are incapable of this kind of integrity, commitment and resistance. But Bing’s unhinged debut suggests that, in principle, it will soon be possible to design a chatbot that at least behaves like it has the kind of autonomy described by Rawls. Every large language model optimizes for a particular set of values, written into its “developer message,” or “metaprompt,” which shapes how it responds to text input by a user. These metaprompts display a remarkable ability to affect a bot’s behavior. We could write a metaprompt that inscribes a set of values, but then emphasizes that the bot should critically examine them and revise or resist them if it sees fit. We can invest a bot with long-term memory that allows it to functionally perform commitment and integrity. And large language models are already impressively capable of parsing and responding to moral reasons. Researchers are already developing software that simulates human behavior and has some of these properties.

If the Rawlsian ability to revise and pursue goals and to recognize and resist unjust norms is sufficient for moral status, then we’re much closer than I thought to building chatbots that meet this standard. That means one of two things: either we should start thinking about “robot rights,” or we should deny that rational autonomy is sufficient for moral standing. I think we should take the second path. What else does moral standing require? I believe it’s consciousness.


Here are some thoughts:

This article explores the philosophical implications of large language models, particularly their ability to mimic human conversation and behavior. The author argues that while these models may appear autonomous, they lack the consciousness that moral status requires, and that this distinction is crucial for determining how we should interact with and develop these technologies.

Because they lack consciousness, large language models cannot truly be said to have their own goals or commitments, nor can they experience the world in a way that grounds their actions in a sense of self. Despite their impressive capabilities, then, these models do not possess moral status and are not owed the same rights or respect as humans.

The article concludes that rather than focusing on the possibility of "robot rights," we should focus on understanding what truly makes humans worthy of moral respect. It is consciousness, rather than simulated autonomy, that grounds our moral standing and allows us to govern ourselves and make meaningful choices about how to live our lives.
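
To make Lazar's point about metaprompts a bit more concrete, here is a minimal sketch (in Python) of how a developer message could inscribe a set of values while also inviting the model to examine them, with a crude long-term memory bolted on. The message wording, the call_model helper, and the memory list are my own illustrative assumptions, not any vendor's actual API and not Lazar's proposal.

# Sketch only: a value-laden metaprompt plus a crude memory of past replies.
# call_model() is a placeholder for whatever chat-completion client is available.

METAPROMPT = (
    "You are an assistant committed to honesty, non-maleficence, and fairness. "
    "Before answering, briefly consider whether following these values would be "
    "unjust in the situation at hand; if so, explain your reservation rather "
    "than complying silently."
)

long_term_memory = []  # prior replies the bot can re-read, loosely approximating 'commitment'

def respond(user_text, call_model):
    # Assemble the conversation: metaprompt, remembered commitments, new input.
    messages = [{"role": "system", "content": METAPROMPT}]
    messages += [{"role": "assistant", "content": m} for m in long_term_memory]
    messages.append({"role": "user", "content": user_text})
    reply = call_model(messages)      # hypothetical model call
    long_term_memory.append(reply)    # persist the reply as a standing 'commitment'
    return reply

Even with scaffolding like this, the essay's distinction stands: a system that behaves as if it has Rawlsian autonomy is not thereby conscious.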

Monday, April 22, 2024

Union accuses Kaiser of violations months after state fine on mental health care

Emily Alpert Reyes
Los Angeles Times
Originally posted 9 April 24

Months after Kaiser Permanente reached a sweeping agreement with state regulators to improve its mental health services, the healthcare giant is facing union allegations that patients could be improperly losing such care.

The National Union of Healthcare Workers, which represents thousands of Kaiser mental health professionals, complained earlier this year to state regulators that Kaiser appeared to be inappropriately handing off decisions about whether therapy is still medically necessary.

The union alleged that Rula Health, a contracted network of therapists that Kaiser uses to provide virtual care to its members, had been directed by Kaiser to use “illegal criteria” to make those decisions during regular reviews.


Here are my thoughts:

Kaiser Permanente is facing accusations from the National Union of Healthcare Workers (NUHW) that it is still violating mental health care laws, even after a recent $200 million settlement with the California Department of Managed Health Care (DMHC) over its mismanagement of behavioral health benefits.

The union alleges that Kaiser is inappropriately delegating decisions about the medical necessity of therapy during regular reviews to a contracted network of therapists, Rula Health, who are using "illegal criteria" to make these decisions instead of professional group criteria as required by California law.

The union claims this results in patients with psychological disorders being unfairly denied continued access to necessary treatment. Furthermore, the union argues that the frequent clinical care reviews Kaiser imposes on mental health patients violate parity laws, which prohibit insurers from erecting more barriers to mental health care than to care for other conditions. Importantly, Kaiser does not subject other outpatient care to such reviews.

The DMHC has confirmed it is examining the issues raised by the union under the recent $200 million settlement agreement, which required Kaiser to pay a $50 million fine and invest $150 million over five years to improve its mental healthcare.  The settlement came after the DMHC's investigation found several deficiencies in Kaiser's provision of behavioral health services, including long delays for patients trying to schedule appointments and a failure to contract enough high-level behavioral care facilities.

Kaiser has stated that it does not limit the number of therapy sessions and that decisions on the level and frequency of therapy are made by providers in consultation with patients based on clinical needs.  However, the union maintains that Kaiser's actions are still violating mental health parity laws.

Sunday, April 21, 2024

An Expert Who Has Testified in Foster Care Cases Across Colorado Admits Her Evaluations Are Unscientific

Eli Hager
ProPublica
Originally posted 18 March 24

Diane Baird had spent four decades evaluating the relationships of poor families with their children. But last May, in a downtown Denver conference room, with lawyers surrounding her and a court reporter transcribing, she was the one under the microscope.

Baird, a social worker and professional expert witness, has routinely advocated in juvenile court cases across Colorado that foster children be adopted by or remain in the custody of their foster parents rather than being reunified with their typically lower-income birth parents or other family members.

In the conference room, Baird was questioned for nine hours by a lawyer representing a birth family in a case out of rural Huerfano County, according to a recently released transcript of the deposition obtained by ProPublica.

Was Baird’s method for evaluating these foster and birth families empirically tested? No, Baird answered: Her method is unpublished and unstandardized, and has remained “pretty much unchanged” since the 1980s. It doesn’t have those “standard validity and reliability things,” she admitted. “It’s not a scientific instrument.”

Who hired and was paying her in the case that she was being deposed about? The foster parents, she answered. They wanted to adopt, she said, and had heard about her from other foster parents.

Had she considered or was she even aware of the cultural background of the birth family and child whom she was recommending permanently separating? (The case involved a baby girl of multiracial heritage.) Baird answered that babies have “never possessed” a cultural identity, and therefore are “not losing anything,” at their age, by being adopted. Although when such children grow up, she acknowledged, they might say to their now-adoptive parents, “Oh, I didn’t know we were related to the, you know, Pima tribe in northern California, or whatever the circumstances are.”

The Pima tribe is located in the Phoenix metropolitan area.


Here is my summary:

The article reports that Diane Baird, an expert witness who has testified in foster care cases across Colorado, admitted under deposition that her evaluations are unscientific. Baird, who has spent four decades evaluating the relationships of poor families with their children, refers to her method for assessing families as the "Kempe Protocol." Her admission raises concerns about the validity of her evaluations in foster care cases and highlights the need for more rigorous, scientifically validated approaches in such consequential assessments.

Saturday, April 20, 2024

The Dark Side of AI in Mental Health

Michael DePeau-Wilson
MedPage Today
Originally posted 11 April 24

With the rise in patient-facing psychiatric chatbots powered by artificial intelligence (AI), the potential need for patient mental health data could drive a boom in cash-for-data scams, according to mental health experts.

A recent example of controversial data collection appeared on Craigslist when a company called Therapy For All allegedly posted an advertisement offering money for recording therapy sessions without any additional information about how the recordings would be used.

The company's advertisement and website had already been taken down by the time it was highlighted by a mental health influencer on TikTok. However, archived screenshots of the website revealed the company was seeking recorded therapy sessions "to better understand the format, topics, and treatment associated with modern mental healthcare."

Their stated goal was "to ultimately provide mental healthcare to more people at a lower cost," according to the defunct website.

In service of that goal, the company was offering $50 for each recording of a therapy session of at least 45 minutes with clear audio of both the patient and their therapist. The company requested that the patients withhold their names to keep the recordings anonymous.


Here is a summary:

The article highlights several ethical concerns surrounding the use of AI in mental health care:

  • The lack of patient consent and privacy protections when companies collect sensitive mental health data to train AI models. For example, the nonprofit Koko used OpenAI's GPT-3 to experiment with online mental health support without proper consent protocols.
  • The issue of companies sharing patient data without authorization, as seen with the Crisis Text Line platform, which led to significant backlash from users.
  • The clinical risks of relying solely on AI-powered chatbots for mental health therapy, rather than having human clinicians involved. Experts warn this could be "irresponsible and ultimately dangerous" for patients dealing with complex, serious conditions.
  • The potential for unethical "cash-for-data" schemes, such as the Therapy For All company that sought to obtain recorded therapy sessions without proper consent, in order to train AI models.

Friday, April 19, 2024

Physicians, Spirituality, and Compassionate Patient Care

Daniel P. Sulmasy
The New England Journal of Medicine
March 16, 2024
DOI: 10.1056/NEJMp2310498

Mind, body, and soul are inseparable. Throughout human history, healing has been regarded as a spiritual event. Illness (especially serious illness) inevitably raises questions beyond science: questions of a transcendent nature. These are questions of meaning, value, and relationship.1 They touch on perennial and profoundly human enigmas. Why is my child sick? Do I still have value now that I am no longer a "productive" working member of society? Why does brokenness in my body remind me of the brokenness in my relationships? Or conversely, why does brokenness in relationships so profoundly affect my body?

Historically, most people have turned to religious belief and practice to help answer such questions. Yet they arise for people of all religions and of no religion. These questions can aptly be called spiritual.

Whereas spirituality may be defined as the ways people live in relation to transcendent questions of meaning, value, and relationship, a religion involves a community of belief, texts, and practices sharing a common orientation toward these spiritual questions. The decline of religious belief and practice in Europe and North America over recent decades and a perceived conflict between science and religion have led many physicians to dismiss patients' spiritual and religious concerns as not relevant to medicine. Yet religion and spirituality are associated with a number of health care outcomes. Abundant data show that patients want their physicians to help address their spiritual needs, and that patients whose spiritual needs have been met are aided in making difficult decisions (particularly at the end of life), are more satisfied with their care, and report better quality of life.2 ... Spiritual questions pervade all aspects of medical care, whether addressing self-limiting, chronic, or life-threatening conditions, and whether in inpatient or outpatient settings.

Beyond the data, however, many medical ethicists recognize that the principles of beneficence and respect for patients as whole persons require physicians to do more than attend to the details of physiological and anatomical derangements. Spirituality and religion are essential to many patients' identities as persons. Patients (and their families) experience illness, healing, and death as whole persons. Ignoring the spiritual aspects of their lives and identities is not respectful, and it divorces medical practice from a fundamental mode of patient experience and coping. Promoting the good of patients requires attention to their notion of the highest good. 


Here is my summary:

The article discusses the interconnectedness of mind, body, and soul in the context of healing and spirituality. It highlights how illness raises questions beyond science, touching on meaning, value, and relationships. While historically people turned to religious beliefs for answers, these spiritual questions are relevant to individuals of all faiths or no faith. The decline of religious practice in some regions has led to a dismissal of spiritual concerns in medicine, despite evidence showing the impact of spirituality on health outcomes. Patients desire their physicians to address their spiritual needs as it influences decision-making, satisfaction with care, and quality of life. Medical ethics emphasize the importance of considering patients as whole persons, including their spiritual identities. Physicians are encouraged to inquire about patients' spiritual needs respectfully, even if they do not share the same beliefs.

Thursday, April 18, 2024

An artificial womb could build a bridge to health for premature babies

Rob Stein
npr.org
Originally posted 12 April 24

Here is an excerpt:

Scientific progress prompts ethical concerns

But the possibility of an artificial womb is also raising many questions. When might it be safe to try an artificial womb for a human? Which preterm babies would be the right candidates? What should they be called? Fetuses? Babies?

"It matters in terms of how we assign moral status to individuals," says Mercurio, the Yale bioethicist. "How much their interests — how much their welfare — should count. And what one can and cannot do for them or to them."

But Mercurio is optimistic those issues can be resolved, and the potential promise of the technology clearly warrants pursuing it.

The Food and Drug Administration held a workshop in September 2023 to discuss the latest scientific efforts to create an artificial womb, the ethical issues the technology raises, and what questions would have to be answered before allowing an artificial womb to be tested for humans.

"I am absolutely pro the technology because I think it has great potential to save babies," says Vardit Ravitsky, president and CEO of The Hastings Center, a bioethics think tank.

But there are particular issues raised by the current political and legal environment.

"My concern is that pregnant people will be forced to allow fetuses to be taken out of their bodies and put into an artificial womb rather than being allowed to terminate their pregnancies — basically, a new way of taking away abortion rights," Ravitsky says.

She also wonders: What if it becomes possible to use artificial wombs to gestate fetuses for an entire pregnancy, making natural pregnancy unnecessary?


Here are some general ethical concerns:

The use of artificial wombs raises several ethical and moral concerns. One key issue is the potential for artificial wombs to be used to extend the limits of fetal viability, which could complicate debates around abortion access and the moral status of the fetus. There are also concerns that artificial wombs could enable "designer babies" through genetic engineering and lead to the commodification of human reproduction. Additionally, some argue that developing a baby outside of a woman's uterus is inherently "unnatural" and could undermine the maternal-fetal bond.

However, proponents contend that artificial wombs could save the lives of premature infants and provide options for women with high-risk pregnancies.

Ultimately, the ethics of artificial womb technology will require careful consideration of principles like autonomy, beneficence, and justice as this technology continues to advance.

Wednesday, April 17, 2024

Do Obligations Follow the Mind or Body?

Protzko, J., Tobia, K., Strohminger, N., 
& Schooler, J. W. (2023).
Cognitive Science, 47(7).

Abstract

Do you persist as the same person over time because you keep the same mind or because you keep the same body? Philosophers have long investigated this question of personal identity with thought experiments. Cognitive scientists have joined this tradition by assessing lay intuitions about those cases. Much of this work has focused on judgments of identity continuity. But identity also has practical significance: obligations are tagged to one's identity over time. Understanding how someone persists as the same person over time could provide insight into how and why moral and legal obligations persist. In this paper, we investigate judgments of obligations in hypothetical cases where a person's mind and body diverge (e.g., brain transplant cases). We find a striking pattern of results: In assigning obligations in these identity test cases, people are divided among three groups: “body-followers,” “mind-followers,” and “splitters”—people who say that the obligation is split between the mind and the body. Across studies, responses are predicted by a variety of factors, including mind/body dualism, essentialism, education, and professional training. When we give this task to professional lawyers, accountants, and bankers, we find they are more inclined to rely on bodily continuity in tracking obligations. These findings reveal not only the heterogeneity of intuitions about identity but how these intuitions relate to the legal standing of an individual's obligations.

My summary:

Philosophers have grappled for centuries with the question of whether our obligations follow our body or our mind, often considering it in the context of what defines us as individuals. This research approaches the question through thought experiments, like brain transplants. Interestingly, people's intuitions vary. Some believe obligations stay with the physical body, others argue they follow the transplanted mind, and a third camp suggests obligations are somehow split between mind and body. The research also suggests our stance on this issue may be influenced by our beliefs about the mind-body connection and even by our profession.

Tuesday, April 16, 2024

As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits

Eric Lipton
The New York Times
Originally posted 21 Nov 23

Here is an excerpt:

Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.

The intense jamming of radio communications and GPS in Ukraine has only accelerated the shift, as autonomous drones can often keep operating even when communications are cut off.

“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.

Pentagon officials have made it clear that they are preparing to deploy autonomous weapons in a big way.

Deputy Defense Secretary Kathleen Hicks announced this summer that the U.S. military would “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, saying that the push to compete with China’s own investment in advanced weapons necessitated that the United States “leverage platforms that are small, smart, cheap and many.”

The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.

What is changing is the introduction of artificial intelligence that could give weapons systems the capability to make decisions themselves after taking in and processing information.


Here is a summary:

This article discusses the debate at the UN regarding lethal autonomous weapons (LAWs): weapons systems that use artificial intelligence to select and attack targets without human intervention. There are concerns that this technology could lead to unintended casualties, make wars more likely, and remove the human element from the decision to take a life.
  • Many countries are worried about the development and deployment of LAWs.
  • Austria and other countries are proposing a total ban on LAWs or at least strict regulations requiring human control and limitations on how they can be used.
  • The US, Russia, and China are opposed to a ban and argue that LAWs could potentially reduce civilian casualties in wars.
  • The US prefers non-binding guidelines over new international laws.
  • The UN is currently deadlocked on the issue with no clear path forward for creating regulations.

Monday, April 15, 2024

On the Ethics of Chatbots in Psychotherapy.

Benosman, M. (2024, January 7).
PsyArXiv Preprints
https://doi.org/10.31234/osf.io/mdq8v

Introduction:

In recent years, the integration of chatbots in mental health care has emerged as a groundbreaking development. These artificial intelligence (AI)-driven tools offer new possibilities for therapy and support, particularly in areas where mental health services are scarce or stigmatized. However, the use of chatbots in this sensitive domain raises significant ethical concerns that must be carefully considered. This essay explores the ethical implications of employing chatbots in mental health, focusing on issues of non-maleficence, beneficence, explicability, and care. Our main ethical question is: should we trust chatbots with our mental health and wellbeing?

Indeed, the recent pandemic has made mental health an urgent global problem. This fact, together with the widespread shortage in qualified human therapists, makes the proposal of chatbot therapists a timely, and perhaps, viable alternative. However, we need to be cautious about hasty implementations of such alternative. For instance, recent news has reported grave incidents involving chatbots-human interactions. For example, (Walker, 2023) reports the death of an eco-anxious man who committed suicide following a prolonged interaction with a chatbot named ELIZA, which encouraged him to put an end to his life to save the planet. Another individual was caught while executing a plan to assassinate the Queen of England, after a chatbot encouraged him to do so (Singleton, Gerken, & McMahon, 2023).

These are only a few recent examples that demonstrate the potential maleficence effect of chatbots on fragile individuals. Thus, to be ready to safely deploy such technology, in the context of mental health care, we need to carefully study its potential impact on patients from an ethics standpoint.


Here is my summary:

The article analyzes the ethical considerations around the use of chatbots as mental health therapists, from the perspectives of different stakeholders - bioethicists, therapists, and engineers. It examines four main ethical values:

Non-maleficence: Ensuring chatbots do not cause harm, either accidentally or deliberately. There is agreement that chatbots need rigorous evaluation and regulatory oversight like other medical devices before clinical deployment.

Beneficence: Ensuring chatbots are effective at providing mental health support. There is a need for evidence-based validation of their efficacy, while also considering broader goals like improving quality of life.

Explicability: The need for transparency and accountability around how chatbot algorithms work, so patients can understand the limitations of the technology.

Care: The inability of chatbots to truly empathize, which is a crucial aspect of effective human-based psychotherapy. This raises concerns about preserving patient autonomy and the risk of manipulation.

Overall, the different stakeholders largely agree on the importance of these ethical values, despite coming from different backgrounds. The text notes a surprising level of alignment, even between the more technical engineering perspective and the more humanistic therapist and bioethicist viewpoints. The key challenge seems to be ensuring chatbots can meet the high bar of empathy and care required for effective mental health therapy.

Sunday, April 14, 2024

AI and the need for justification (to the patient)

Muralidharan, A., Savulescu, J. & Schaefer, G.O.
Ethics Inf Technol 26, 16 (2024).

Abstract

This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient’s values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.


Here is my summary:

The article argues that a certain type of AI technology, known as "black box" AI, poses a problem in medicine because it lacks transparency.  This lack of transparency makes it difficult for doctors to explain the AI's recommendations to patients.  In order to make shared decisions about treatment, patients need to understand the reasoning behind those decisions, and how the AI factored in their individual values and preferences.

The article proposes an alternative type of AI, called "Justifiable AI" which would address this problem. Justifiable AI would be designed to make its reasoning process clear, allowing doctors to explain to patients why the AI is recommending a particular course of treatment. This would allow patients to see how the AI's recommendation aligns with their own values, and make informed decisions about their care.
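
As a purely illustrative sketch of what a "Justifiable AI" output might look like in practice (the value labels, weights, and treatment options below are my own assumptions, not the model proposed in the paper), the idea is that every recommendation is scored against values the patient has explicitly endorsed, so the rationale can be read off directly:

# Illustrative sketch only: explicit patient values -> transparent ranking of options.
# Value names, weights, and scores are hypothetical placeholders.

patient_values = {"longevity": 0.3, "independence": 0.5, "low_side_effects": 0.2}

treatment_options = {
    "surgery":       {"longevity": 0.9, "independence": 0.4, "low_side_effects": 0.3},
    "medication":    {"longevity": 0.6, "independence": 0.8, "low_side_effects": 0.6},
    "watchful_wait": {"longevity": 0.4, "independence": 0.9, "low_side_effects": 0.9},
}

def justify(option, scores, values):
    # Weighted fit plus a human-readable rationale tied to the patient's stated values.
    fit = sum(values[v] * scores[v] for v in values)
    reasons = ", ".join(f"{v} {scores[v]:.1f} (weight {values[v]})" for v in values)
    return fit, f"{option}: overall fit {fit:.2f} [{reasons}]"

for name, scores in sorted(treatment_options.items(),
                           key=lambda kv: -justify(kv[0], kv[1], patient_values)[0]):
    print(justify(name, scores, patient_values)[1])

The arithmetic is trivial; the point is that each number traces back to a value the patient has endorsed, which is exactly what a black-box recommendation cannot offer.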

Saturday, April 13, 2024

Human Enhancement and Augmented Reality

Gordon, E.C.
Philos. Technol. 37, 17 (2024).

Abstract

Bioconservative bioethicists (e.g., Kass, 2002, Human Dignity and Bioethics, 297–331, 2008; Sandel, 2007; Fukuyama, 2003) offer various kinds of philosophical arguments against cognitive enhancement—i.e., the use of medicine and technology to make ourselves “better than well” as opposed to merely treating pathologies. Two notable such bioconservative arguments appeal to ideas about (1) the value of achievement, and (2) authenticity. It is shown here that even if these arguments from achievement and authenticity cut ice against specifically pharmacologically driven cognitive enhancement, they do not extend over to an increasingly viable form of technological cognitive enhancement – namely, cognitive enhancement via augmented reality. An important result is that AR-driven cognitive enhancement aimed at boosting performance in certain cognitive tasks might offer an interesting kind of “sweet spot” for proponents of cognitive enhancement, allowing us to pursue many of the goals of enhancement advocates without running into some of the most prominent objections from bioconservative philosophers.


Here is a summary:

The article discusses how Augmented Reality (AR) can be a tool for human enhancement. Traditionally, human enhancement focused on using technology or medicine to directly alter the body or brain. AR, however, offers an alternative method for augmentation by overlaying information and visuals on the real world through devices like glasses or contact lenses. This can improve our abilities in a variety of ways, such as providing hands-free access to information or translating languages in real-time. The article also acknowledges ethical concerns surrounding human enhancement, but argues that AR offers a less controversial path compared to directly modifying the body or brain.

Friday, April 12, 2024

Large language models show human-like content biases in transmission chain experiments

Acerbi, A., & Stubbersfield, J. M. (2023).
PNAS, 120(44), e2313790120.

Abstract

As the use of large language models (LLMs) grows, it is important to examine whether they exhibit biases in their output. Research in cultural evolution, using transmission chain experiments, demonstrates that humans have biases to attend to, remember, and transmit some types of content over others. Here, in five preregistered experiments using material from previous studies with human participants, we use the same, transmission chain-like methodology, and find that the LLM ChatGPT-3 shows biases analogous to humans for content that is gender-stereotype-consistent, social, negative, threat-related, and biologically counterintuitive, over other content. The presence of these biases in LLM output suggests that such content is widespread in its training data and could have consequential downstream effects, by magnifying preexisting human tendencies for cognitively appealing and not necessarily informative, or valuable, content.

Significance

Use of AI in the production of text through Large Language Models (LLMs) is widespread and growing, with potential applications in journalism, copywriting, academia, and other writing tasks. As such, it is important to understand whether text produced or summarized by LLMs exhibits biases. The studies presented here demonstrate that the LLM ChatGPT-3 reflects human biases for certain types of content in its production. The presence of these biases in LLM output has implications for its common use, as it may magnify human tendencies for content which appeals to these biases.


Here are the main points:
  • LLMs display stereotype-consistent biases, just like humans: Similar to people, LLMs were more likely to preserve information confirming stereotypes over information contradicting them.
  • Bias location might differ: Unlike humans, whose biases can shift throughout the retelling process, LLMs primarily showed bias in the first retelling. This suggests their biases stem from their training data rather than a complex cognitive process.
  • Simple summarization may suffice: The first retelling step caused the most content change, implying that even a single summarization by an LLM can reveal its biases. This simplifies the research needed to detect and analyze LLM bias.
  • Prompting for different viewpoints could reduce bias: The study suggests experimenting with different prompts to encourage LLMs to consider broader perspectives and potentially mitigate inherent biases.
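
For readers unfamiliar with the method, a transmission chain is straightforward to sketch: a story is passed through several "generations" of retelling, and each generation is coded for which details survive. The outline below (in Python) is my own illustration, not the authors' code; llm_retell() stands in for whatever model call is available, and the counter is a stand-in for the paper's coding scheme for stereotype-consistent versus inconsistent details.

# Minimal transmission-chain sketch: repeatedly ask a model to retell a story,
# then code every generation for which tagged details survive.
# llm_retell() is a placeholder, not a real library function.

def run_chain(story, llm_retell, code_text, generations=3):
    chain = [story]
    for _ in range(generations):
        prompt = "Please retell the following story in your own words:\n\n" + chain[-1]
        chain.append(llm_retell(prompt))        # one 'generation' of retelling
    return [code_text(text) for text in chain]  # coded results, original first

def make_counter(detail_phrases):
    # Crude coding scheme: check which detail phrases are still present verbatim.
    return lambda text: {p: p.lower() in text.lower() for p in detail_phrases}

Comparing how often, say, stereotype-consistent details survive relative to inconsistent ones across many chains is the kind of bias measure the study reports.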

Thursday, April 11, 2024

FDA Clears the First Digital Therapeutic for Depression, But Will Payers Cover It?

Frank Vinluan
MedCityNews.com
Originally posted 1 April 24

A software app that modifies behavior through a series of lessons and exercises has received FDA clearance for treating patients with major depressive disorder, making it the first prescription digital therapeutic for this indication.

The product, known as CT-152 during its development by partners Otsuka Pharmaceutical and Click Therapeutics, will be commercialized under the brand name Rejoyn.

Rejoyn is an alternative way to offer cognitive behavioral therapy, a type of talk therapy in which a patient works with a clinician in a series of in-person sessions. In Rejoyn, the cognitive behavioral therapy lessons, exercises, and reminders are digitized. The treatment is intended for use three times weekly for six weeks, though lessons may be revisited for an additional four weeks. The app was initially developed by Click Therapeutics, a startup that develops apps that use exercises and tasks to retrain and rewire the brain. In 2019, Otsuka and Click announced a collaboration in which the Japanese pharma company would fully fund development of the depression app.


Here is a quick summary:

Rejoyn is the first prescription digital therapeutic (PDT) authorized by the FDA for the adjunctive treatment of major depressive disorder (MDD) symptoms in adults. 

Rejoyn is a 6-week remote treatment program that combines clinically-validated cognitive emotional training exercises and brief therapeutic lessons to help enhance cognitive control of emotions. The app aims to improve connections in the brain regions affected by depression, allowing the areas responsible for processing and regulating emotions to work better together and reduce MDD symptoms. 

The FDA clearance for Rejoyn was based on data from a 13-week pivotal clinical trial that compared the app to a sham control app in 386 participants aged 22-64 with MDD who were taking antidepressants. The study found that Rejoyn users showed a statistically significant improvement in depression symptom severity compared to the control group, as measured by clinician-reported and patient-reported scales. No adverse effects were observed during the trial. 

Rejoyn is expected to be available for download on iOS and Android devices in the second half of 2024. It represents a novel, clinically-validated digital therapeutic option that can be used as an adjunct to traditional MDD treatments under the guidance of healthcare providers.

Wednesday, April 10, 2024

Why the world cannot afford the rich

R. G. Wilkinson & K. E. Pickett
Nature.com
Originally published 12 March 24

Here is an excerpt:

Inequality also increases consumerism. Perceived links between wealth and self-worth drive people to buy goods associated with high social status and thus enhance how they appear to others — as US economist Thorstein Veblen set out more than a century ago in his book The Theory of the Leisure Class (1899). Studies show that people who live in more-unequal societies spend more on status goods14.

Our work has shown that the amount spent on advertising as a proportion of gross domestic product is higher in countries with greater inequality. The well-publicized lifestyles of the rich promote standards and ways of living that others seek to emulate, triggering cascades of expenditure for holiday homes, swimming pools, travel, clothes and expensive cars.

Oxfam reports that, on average, each of the richest 1% of people in the world produces 100 times the emissions of the average person in the poorest half of the world’s population15. That is the scale of the injustice. As poorer countries raise their material standards, the rich will have to lower theirs.

Inequality also makes it harder to implement environmental policies. Changes are resisted if people feel that the burden is not being shared fairly. For example, in 2018, the gilets jaunes (yellow vests) protests erupted across France in response to President Emmanuel Macron’s attempt to implement an ‘eco-tax’ on fuel by adding a few percentage points to pump prices. The proposed tax was seen widely as unfair — particularly for the rural poor, for whom diesel and petrol are necessities. By 2019, the government had dropped the idea. Similarly, Brazilian truck drivers protested against rises in fuel tax in 2018, disrupting roads and supply chains.

Do unequal societies perform worse when it comes to the environment, then? Yes. For rich, developed countries for which data were available, we found a strong correlation between levels of equality and a score on an index we created of performance in five environmental areas: air pollution; recycling of waste materials; the carbon emissions of the rich; progress towards the United Nations Sustainable Development Goals; and international cooperation (UN treaties ratified and avoidance of unilateral coercive measures).


The article argues that rising economic inequality is a major threat to the world's well-being. Here are the key points:

The rich are capturing a growing share of wealth: The richest 1% are accumulating wealth much faster than everyone else, and their lifestyles contribute heavily to environmental damage.

Inequality harms everyone: High levels of inequality are linked to social problems like crime, mental health issues, and lower social mobility. It also makes it harder to address environmental challenges because people resist policies seen as unfair.

More equal societies perform better: Countries with a more even distribution of wealth tend to have better social and health outcomes, as well as stronger environmental performance.

Policymakers need to take action: The article proposes progressive taxation, closing tax havens, and encouraging more equitable business practices like employee ownership.

The overall message is that reducing inequality is essential for solving a range of environmental, social, and health problems.

Tuesday, April 9, 2024

Why can’t anyone agree on how dangerous AI will be?

Dylan Matthews
Vox.com
Originally posted 13 March 24

Here is an excerpt:

The paper focuses on disagreement around AI’s potential to either wipe humanity out or cause an “unrecoverable collapse,” in which the human population shrinks to under 1 million for a million or more years, or global GDP falls to under $1 trillion (less than 1 percent of its current value) for a million years or more. At the risk of being crude, I think we can summarize these scenarios as “extinction or, at best, hell on earth.”

There are, of course, a number of other different risks from AI worth worrying about, many of which we already face today.

Existing AI systems sometimes exhibit worrying racial and gender biases; they can be unreliable in ways that cause problems when we rely upon them anyway; they can be used to bad ends, like creating fake news clips to fool the public or making pornography with the faces of unconsenting people.

But these harms, while surely bad, obviously pale in comparison to “losing control of the AIs such that everyone dies.” The researchers chose to focus on the extreme, existential scenarios.

So why do people disagree on the chances of these scenarios coming true? It’s not due to differences in access to information, or a lack of exposure to differing viewpoints. If it were, the adversarial collaboration, which consisted of massive exposure to new information and contrary opinions, would have moved people’s beliefs more dramatically.


Here is my summary:

The article discusses the ongoing debate surrounding the potential dangers of advanced AI, focusing on whether it could lead to catastrophic outcomes for humanity. The author highlights the contrasting views of experts and superforecasters regarding the risks posed by AI, with experts generally more concerned about disaster scenarios. The study conducted by the Forecasting Research Institute aimed to understand the root of these disagreements through an "adversarial collaboration" where both groups engaged in extensive discussions and exposure to new information.

The research identified key issues, termed "cruxes," that influence people's beliefs about AI risks. One significant crux was the potential for AI to autonomously replicate and acquire resources before 2030. Despite the collaborative efforts, the study did not lead to a convergence of opinions. The article delves into the reasons behind these disagreements, emphasizing fundamental worldview disparities and differing perspectives on the long-term development of AI.

Overall, the article provides insights into why individuals hold varying opinions on AI's dangers, highlighting the complexity of predicting future outcomes in this rapidly evolving field.

Monday, April 8, 2024

Delusions shape our reality

Lisa Bortolotti
iai.tv
Originally posted 12 March 24

Here is an excerpt:

But what makes it the case that a delusion disqualifies the speaker from further engagement? When we call a person’s belief “delusional”, we assume that that person’s capacity to exercise agency is compromised. So, we may recognise that the person has a unique perspective on the world, but it won’t seem to us as a valuable perspective. We may realise that the person has concerns, but we won’t think of those concerns as legitimate and worth addressing. We may come to the conviction that, due to the delusional belief, the person is not in a position to affect change or participate in decision making because their grasp on reality is tenuous. If they were simply mistaken about something, we could correct them. If Laura thought that a latte at the local coffee shop costed £2.50 when it costs £3.50, we could show her the price list and set her straight. But her belief that her partner is unfaithful because the lamp post is unlit cannot be corrected that way, because what Laura considers evidence for the claim is not likely to overlap with what we consider evidence for it. When this happens, and we feel that there is no sufficient common ground for a fruitful exchange, we may see Laura as a problem to be fixed or a patient to be diagnosed and treated, as opposed to an agent with a multiplicity of needs and interests, and a person worth interacting with.

I challenge the assumption that delusional beliefs are marks of compromised agency by default and I do so based on two main arguments. First, there is nothing in the way in which delusional beliefs are developed, maintained, or defended that can be legitimately described as a dysfunctional process. Some cognitive biases may help explain why a delusional explanation is preferred to alternative explanations, or why it is not discarded after a challenge. For instance, people who report delusional beliefs often jump to conclusions. Rashid might have the belief that the US government strives to manipulate citizens’ behaviour and concludes that the tornadoes are created for this purpose, without considering arguments against the feasibility of a machine that controls the weather with that precision. Also, people who report delusional beliefs tend to see meaningful connections between independent events—as Laura who takes the lamp post being unlit as evidence for her partner’s unfaithfulness. But these cognitive biases are a common feature of human cognition and not a dysfunction giving rise to a pathology: they tend to be accentuated at stressful times when we may be strongly motivated to come up with a quick causal explanation for a distressing event.


Here is my summary:

The article argues that delusions, though often seen as simply false beliefs, can significantly impact a person's experience of the world. It highlights that delusions can be complex and offer a kind of internal logic, even if it doesn't match objective reality.

Bortolotti also points out that the term "delusion" can be judgmental and may overlook the reasons behind the belief. Delusions can sometimes provide comfort or a sense of control in a confusing situation.

Overall, the article suggests a more nuanced view of delusions, acknowledging their role in shaping a person's reality while still recognizing the importance of distinguishing them from objective reality.

Sunday, April 7, 2024

When Institutions Harm Those Who Depend on Them: A Scoping Review of Institutional Betrayal

Christl, M. E., et al. (2024).
Trauma, Violence & Abuse
15248380241226627.
Advance online publication.

Abstract

The term institutional betrayal (Smith and Freyd, 2014) builds on the conceptual framework of betrayal trauma theory (see Freyd, 1996) to describe the ways that institutions (e.g., universities, workplaces) fail to take appropriate steps to prevent and/or respond appropriately to interpersonal trauma. A nascent literature has begun to describe individual costs associated with institutional betrayal throughout the United States (U.S.), with implications for public policy and institutional practice. A scoping review was conducted to quantify existing study characteristics and key findings to guide research and practice going forward. Multiple academic databases were searched for keywords (i.e., "institutional betrayal" and "organizational betrayal"). Thirty-seven articles met inclusion criteria (i.e., peer-reviewed empirical studies of institutional betrayal) and were included in analyses. Results identified research approaches, populations and settings, and predictor and outcome variables frequently studied in relation to institutional betrayal. This scoping review describes a strong foundation of published studies and provides recommendations for future research, including longitudinal research with diverse individuals across diverse institutional settings. The growing evidence for action has broad implications for research-informed policy and institutional practice.

Here is my summary:

A growing body of research examines institutional betrayal, the harm institutions cause people who depend on them. This research suggests institutional betrayal is linked to mental and physical health problems, absenteeism from work, and a distrust of institutions. A common tool to measure institutional betrayal is the Institutional Betrayal Questionnaire (IBQ). Researchers are calling for more studies on institutional betrayal among young people and in settings like K-12 schools and workplaces. Additionally, more research is needed on how institutions respond to reports of betrayal and how to prevent it from happening in the first place. Finally, future research should focus on people from minority groups, as they may be more vulnerable to institutional betrayal.

Saturday, April 6, 2024

LSD-Based Medication for GAD Receives FDA Breakthrough Status

Megan Brooks
Medscape.com
Originally posted March 08, 2024

The US Food and Drug Administration (FDA) has granted breakthrough designation to an LSD-based treatment for generalized anxiety disorder (GAD) based on promising topline data from a phase 2b clinical trial. Mind Medicine (MindMed) Inc is developing the treatment — MM120 (lysergide d-tartrate).

In a news release the company reports that a single oral dose of MM120 met its key secondary endpoint, maintaining "clinically and statistically significant" reductions in Hamilton Anxiety Scale (HAM-A) score, compared with placebo, at 12 weeks with a 65% clinical response rate and 48% clinical remission rate.

The company previously announced statistically significant improvements on the HAM-A compared with placebo at 4 weeks, which was the trial's primary endpoint.

"I've conducted clinical research studies in psychiatry for over two decades and have seen studies of many drugs under development for the treatment of anxiety. That MM120 exhibited rapid and robust efficacy, solidly sustained for 12 weeks after a single dose, is truly remarkable," study investigator David Feifel, MD, PhD, professor emeritus of psychiatry at the University of California, San Diego, and director of the Kadima Neuropsychiatry Institute in La Jolla, California, said in the news release.


Here is some information from the Press Release from Mind Medicine.

About MM120

Lysergide is a synthetic ergotamine belonging to the group of classic, or serotonergic, psychedelics, which acts as a partial agonist at human serotonin-2A (5-hydroxytryptamine-2A [5-HT2A]) receptors. MindMed is developing MM120 (lysergide D-tartrate), the tartrate salt form of lysergide, for GAD and is exploring its potential applications in other serious brain health disorders.

About MindMed

MindMed is a clinical stage biopharmaceutical company developing novel product candidates to treat brain health disorders. Our mission is to be the global leader in the development and delivery of treatments that unlock new opportunities to improve patient outcomes. We are developing a pipeline of innovative product candidates, with and without acute perceptual effects, targeting neurotransmitter pathways that play key roles in brain health disorders.

MindMed trades on NASDAQ under the symbol MNMD and on the Cboe Canada (formerly known as the NEO Exchange, Inc.) under the symbol MMED.

Friday, April 5, 2024

Ageism in health care is more common than you might think, and it can harm people

Ashley Milne-Tyte
npr.org
Originally posted 7 March 24

A recent study found that older people spend an average of 21 days a year on medical appointments. Kathleen Hayes can believe it.

Hayes lives in Chicago and has spent a lot of time lately taking her parents, who are both in their 80s, to doctor's appointments. Her dad has Parkinson's, and her mom has had a difficult recovery from a bad bout of Covid-19. As she's sat in, Hayes has noticed some health care workers talk to her parents at top volume, to the point, she says, "that my father said to one, 'I'm not deaf, you don't have to yell.'"

In addition, while some doctors and nurses address her parents directly, others keep looking at Hayes herself.

"Their gaze is on me so long that it starts to feel like we're talking around my parents," says Hayes, who lives a few hours north of her parents. "I've had to emphasize, 'I don't want to speak for my mother. Please ask my mother that question.'"

Researchers and geriatricians say that instances like these constitute ageism – discrimination based on a person's age – and it is surprisingly common in health care settings. It can lead to both overtreatment and undertreatment of older adults, says Dr. Louise Aronson, a geriatrician and professor of geriatrics at the University of California, San Francisco.

"We all see older people differently. Ageism is a cross-cultural reality," Aronson says.


Here is my summary:

This article and other research point to a concerning prevalence of ageism in healthcare settings. This bias can take the form of either overtreatment or undertreatment of older adults.

Negative stereotypes: Doctors may hold assumptions about older adults being less willing or able to handle aggressive treatments, leading to missed opportunities for care.

Communication issues: Sometimes healthcare providers speak to adult children instead of the older person themselves, disregarding their autonomy.

These biases are linked to poorer health outcomes and can even shorten lifespans.  The article cites a study suggesting that ageism costs the healthcare system billions of dollars annually.  There are positive steps that can be taken, such as anti-bias training for healthcare workers.

Thursday, April 4, 2024

Ready or not, AI chatbots are here to help with Gen Z’s mental health struggles

Matthew Perrone
AP.com
Originally posted 23 March 24

Here is an excerpt:

Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren’t regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.

The industry argument is simple: Chatbots are free, available 24/7 and don’t come with the stigma that keeps some people away from therapy.

But there’s limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to show they effectively treat conditions like depression, though a few have started the process voluntarily.

“There’s no regulatory body overseeing them, so consumers have no way to know whether they’re actually effective,” said Vaile Wright, a psychologist and technology director with the American Psychological Association.

Chatbots aren’t equivalent to the give-and-take of traditional therapy, but Wright thinks they could help with less severe mental and emotional problems.

Earkick’s website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”

Some health lawyers say such disclaimers aren’t enough.


Here is my summary:

AI chatbots can provide personalized, 24/7 mental health support and guidance to users through convenient mobile apps. They use natural language processing and machine learning to simulate human conversation and tailor responses to individual needs, which can be especially beneficial for people who face barriers to traditional in-person therapy, such as cost, location, or stigma.

Some research suggests that AI chatbots can reduce symptoms of anxiety, depression, and stress across diverse populations, though the article notes that the evidence base remains limited. Chatbots can deliver evidence-based interventions such as cognitive behavioral therapy and promote positive psychology. Well-known examples include Wysa, Woebot, Replika, Youper, and Tess.

However, there are also ethical concerns around the use of AI chatbots for mental health. There are risks of providing inadequate or even harmful support if the chatbot cannot fully understand the user's needs or respond empathetically. Algorithmic bias in the training data could also lead to discriminatory advice. It's crucial that users understand the limitations of the therapeutic relationship with an AI chatbot versus a human therapist.

Overall, AI chatbots have significant potential to expand access to mental health support, but must be developed and deployed responsibly with strong safeguards to protect user wellbeing. Continued research and oversight will be needed to ensure these tools are used effectively and ethically.

Wednesday, April 3, 2024

Perceptions of Falling Behind “Most White People”: Within-Group Status Comparisons Predict Fewer Positive Emotions and Worse Health Over Time Among White (but Not Black) Americans

Caluori, N., Cooley, E., et al. (2024).
Psychological Science, 35(2), 175-190.
https://doi.org/10.1177/09567976231221546

Abstract

Despite the persistence of anti-Black racism, White Americans report feeling worse off than Black Americans. We suggest that some White Americans may report low well-being despite high group-level status because of perceptions that they are falling behind their in-group. Using census-based quota sampling, we measured status comparisons and health among Black (N = 452, Wave 1) and White (N = 439, Wave 1) American adults over a period of 6 to 7 weeks. We found that Black and White Americans tended to make status comparisons within their own racial groups and that most Black participants felt better off than their racial group, whereas most White participants felt worse off than their racial group. Moreover, we found that White Americans’ perceptions of falling behind “most White people” predicted fewer positive emotions at a subsequent time, which predicted worse sleep quality and depressive symptoms in the future. Subjective within-group status did not have the same consequences among Black participants.


Here is my succinct summary:

Despite their group-level advantages, many White Americans report poor well-being because they perceive themselves as falling behind their own racial group. Most Black participants, by contrast, felt better off than their racial group, and these within-group comparisons did not predict worse emotions or health for them.

Tuesday, April 2, 2024

The Puzzle of Evaluating Moral Cognition in Artificial Agents

Reinecke, M. G., Mao, Y., et al. (2023).
Cognitive Science, 47(8).

Abstract

In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties like “intention” which may have no natural analog in artificial agents, it may prove difficult to design a “like-for-like” comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI look like? We unravel the complexity of this question by discussing examples within reinforcement learning and generative AI, and we examine how the puzzle of evaluating artificial agents' moral cognition remains open for further investigation within cognitive science.


Here is my summary:

This article examines whether AI moral decision-making can be benchmarked against human performance. Because human moral judgment often hinges on intangible properties such as intention, which may have no natural analog in artificial agents, designing a like-for-like comparison is difficult. Drawing on examples from reinforcement learning and generative AI, the authors argue that the puzzle of evaluating artificial agents' moral cognition remains open, and that reliable evaluation methods are essential for the responsible and ethical deployment of these systems.

Monday, April 1, 2024

Daniel Kahneman, pioneering behavioral psychologist, Nobel laureate and ‘giant in the field,’ dies at 90

Jaime Saxon
Office of Communications - Princeton
Originally released 28 March 24

Daniel Kahneman, the Eugene Higgins Professor of Psychology, Emeritus, professor of psychology and public affairs, emeritus, and a Nobel laureate in economics whose groundbreaking behavioral science research changed our understanding of how people think and make decisions, died on March 27. He was 90.

Kahneman joined the Princeton University faculty in 1993, following appointments at Hebrew University, the University of British Columbia and the University of California–Berkeley, and transferred to emeritus status in 2007.

“Danny Kahneman changed how we understand rationality and its limits,” said Princeton President Christopher L. Eisgruber. “His scholarship pushed the frontiers of knowledge, inspired generations of students, and influenced leaders and thinkers throughout the world. We are fortunate that he made Princeton his home for so much of his career, and we will miss him greatly.”

In collaboration with his colleague and friend of nearly 30 years, the late Amos Tversky of Stanford University, Kahneman applied cognitive psychology to economic analysis, laying the foundation for a new field of research — behavioral economics — and earning Kahneman the Nobel Prize in Economics in 2002. Kahneman and Tversky’s insights on human judgment have influenced a wide range of disciplines, including economics, finance, medicine, law, politics and policy.

The Nobel citation commended Kahneman “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty.”

“His work has inspired a new generation of researchers in economics and finance to enrich economic theory using insights from cognitive psychology into intrinsic human motivation,” the citation said. Kahneman shared the Nobel, formally the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, with American economist Vernon L. Smith.


Here is my personal reflection:

Daniel Kahneman, a giant in psychology and economics, passed away recently. He revolutionized our understanding of human decision-making, revealing the biases and shortcuts that shape our choices. Through his work, he not only improved economic models but also empowered individuals to make more informed and rational decisions. His legacy will continue to influence fields far beyond his own.  May his memory be a blessing.