Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, May 31, 2023

Can AI language models replace human participants?

Dillon, D., Tandon, N., Gu, Y., & Gray, K.
Trends in Cognitive Sciences
May 10, 2023


Recent work suggests that language models such as GPT can make human-like judgments across a number of domains. We explore whether and when language models might replace human participants in psychological science. We review nascent research, provide a theoretical model, and outline caveats of using AI as a participant.


Does GPT make human-like judgments?

We initially doubted the ability of LLMs to capture human judgments but, as we detail in Box 1, the moral judgments of GPT-3.5 were extremely well aligned with human moral judgments in our analysis (r = 0.95; full details at https://nikett.github.io/gpt-as-participant). Human morality is often argued to be especially difficult for language models to capture, and yet we found powerful alignment between GPT-3.5 and human judgments.

We emphasize that this finding is just one anecdote, and we do not make any strong claims about the extent to which LLMs make human-like judgments, moral or otherwise. Language models might also be especially good at predicting moral judgments because moral judgments heavily hinge on the structural features of scenarios, including the presence of an intentional agent, the causation of damage, and a vulnerable victim, features that language models may have an easy time detecting. However, the results are intriguing.

Other researchers have empirically demonstrated GPT-3's ability to simulate human participants in domains beyond moral judgments, including predicting voting choices, replicating behavior in economic games, and displaying human-like problem solving and heuristic judgments on scenarios from cognitive psychology. LLM studies have also replicated classic social science findings, including the Ultimatum Game and the Milgram experiment. One company (http://syntheticusers.com) is expanding on these findings, building infrastructure to replace human participants and offering ‘synthetic AI participants’ for studies.


From the Caveats and looking ahead section:

Language models may be far from human, but they are trained on a tremendous corpus of human expression and thus they could help us learn about human judgments. We encourage scientists to compare simulated language model data with human data to see how aligned they are across different domains and populations. Just as language models like GPT may help to give insight into human judgments, comparing LLMs with human judgments can teach us about the machine minds of LLMs; for example, shedding light on their ethical decision making.
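As a purely illustrative sketch of such a comparison: the alignment statistic reported above (r = 0.95) is a Pearson correlation between mean human ratings and model ratings of the same scenarios. The ratings below are invented for illustration, not the study's data:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical mean moral-wrongness ratings (1-7 scale) of the same
# eight scenarios, from human participants and from an LLM "participant".
human_ratings = [6.1, 2.3, 5.4, 1.8, 4.9, 3.2, 6.5, 2.0]
llm_ratings   = [5.8, 2.6, 5.1, 2.1, 4.6, 3.5, 6.2, 2.4]

print(f"alignment: r = {pearson_r(human_ratings, llm_ratings):.2f}")
```

The same comparison generalizes to other domains and populations by swapping in different scenario sets for the two rating lists.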

Lurking under the specific concerns about the usefulness of AI language models as participants is an age-old question: can AI ever be human enough to replace humans? On the one hand, critics might argue that AI participants lack the rationality of humans, making judgments that are odd, unreliable, or biased. On the other hand, humans are odd, unreliable, and biased – and other critics might argue that AI is just too sensible, reliable, and impartial. What is the right mix of rational and irrational to best capture a human participant? Perhaps we should ask a big sample of human participants to answer that question. We could also ask GPT.

Tuesday, May 30, 2023

Are We Ready for AI to Raise the Dead?

Jack Holmes
Esquire Magazine
Originally posted 4 May 23

Here is an excerpt:

You can see wonderful possibilities here. Some might find comfort in hearing their mom’s voice, particularly if she sounds like she really sounded and gives the kind of advice she really gave. But Sandel told me that when he presents the choice to students in his ethics classes, the reaction is split, even as he asks in two different ways. First, he asks whether they’d be interested in the chatbot if their loved one bequeathed it to them upon their death. Then he asks if they’d be interested in building a model of themselves to bequeath to others. Oh, and what if a chatbot is built without input from the person getting resurrected? The notion that someone chose to be represented posthumously in a digital avatar seems important, but even then, what if the model makes mistakes? What if it misrepresents—slanders, even—the dead?

Soon enough, these questions won’t be theoretical, and there is no broad agreement about whom—or even what—to ask. We’re approaching a more fundamental ethical quandary than we often hear about in discussions around AI: human bias embedded in algorithms, privacy and surveillance concerns, mis- and disinformation, cheating and plagiarism, the displacement of jobs, deepfakes. These issues are really all interconnected—Osama bot Laden might make the real guy seem kinda reasonable or just preach jihad to tweens—and they all need to be confronted. We think a lot about the mundane (kids cheating in AP History) and the extreme (some advanced AI extinguishing the human race), but we’re more likely to careen through the messy corridor in between. We need to think about what’s allowed and how we’ll decide.


Our governing troubles are compounded by the fact that, while a few firms are leading the way on building these unprecedented machines, the technology will soon become diffuse. More of the codebase for these models is likely to become publicly available, enabling highly talented computer scientists to build their own in the garage. (Some folks at Stanford have already built a ChatGPT imitator for around $600.) What happens when some entrepreneurial types construct a model of a dead person without the family’s permission? (We got something of a preview in April when a German tabloid ran an AI-generated interview with ex–Formula 1 driver Michael Schumacher, who suffered a traumatic brain injury in 2013. His family threatened to sue.) What if it’s an inaccurate portrayal or it suffers from what computer scientists call “hallucinations,” when chatbots spit out wildly false things? We’ve already got revenge porn. What if an old enemy constructs a false version of your dead wife out of spite? “There’s an important tension between open access and safety concerns,” Reich says. “Nuclear fusion has enormous upside potential,” too, he adds, but in some cases, open access to the flesh and bones of AI models could be like “inviting people around the world to play with plutonium.”

Yes, there was a Black Mirror episode ("Be Right Back") about this issue.

Monday, May 29, 2023


Almeida, G., Struchiner, N., & Hannikainen, I. (2023, April 17).
In K. Tobia (Ed.), Cambridge Handbook of Experimental Jurisprudence.
Cambridge University Press, forthcoming.


Rules are ubiquitous. They figure prominently in all kinds of practical reasoning. Rules are especially important in jurisprudence, occupying a prominent role in answers to the question of “what is law?” In this chapter, we start by reviewing the evidence showing that both textual and extra-textual elements exert influence over rule violation judgments (Section II). Most studies about rules contrast text with an extra-textual element identified as the “purpose” or “spirit” of the rule. But what counts as the purpose or the spirit of a rule? Is it the goal intended by the rule maker? Or is purpose necessarily moral? Section III reviews the results of experiments designed to answer these questions. These studies show that the extra-textual element that's relevant for the folk concept of rule is moral in nature. Section IV turns to the different explanations that have been entertained in the literature for the pattern of results described in Sections II and III. In Section V, we discuss some other extra-textual elements that have been investigated in the literature. Finally, in Section VI, we connect the results about rules with other issues in legal philosophy. We conclude with a brief discussion of future directions.


In this chapter, we have provided an overview of the experimental jurisprudence of rules. We started by reviewing evidence that extra-textual elements influence rule violation judgments (Section II). We then saw that those elements are likely moral in nature (Section III). There are several ways to conceptualize the relationship between the moral and descriptive elements at play in rule violation judgments. We reviewed some of them in Section IV, where we argued that the evidence favors the hypothesis that the concept of rule has a dual character structure. In Section V, we reviewed some recent studies showing that other elements, such as enforcement, also play a role in the concept of rule. Finally, in Section VI, we considered the implications of these results for some other debates in legal philosophy.

While we have focused on research developed within experimental jurisprudence, empirical work in moral psychology and experimental philosophy has investigated several other questions related to rules that might be of interest to legal philosophers, such as closure rules and the process of learning rules (Nichols, 2004, 2021). But an even larger set of questions about the concept of rule has not yet been explored from an empirical perspective. We will end this chapter by discussing a few of them.

If you practice law, this chapter may sharpen your expertise. The authors explore how ordinary people understand the law: do they interpret rules intuitively from their text, or do they see law as intrinsically moral?

Sunday, May 28, 2023

Above the law? How motivated moral reasoning shapes evaluations of high performer unethicality

Campbell, E. M., Welsh, D. T., & Wang, W. (2023).
Journal of Applied Psychology.
Advance online publication.


Recent revelations have brought to light the misconduct of high performers across various fields and occupations who were promoted up the organizational ladder rather than punished for their unethical behavior. Drawing on principles of motivated moral reasoning, we investigate how employee performance biases supervisors’ moral judgment of employee unethical behavior and how supervisors’ performance-focus shapes how they account for moral judgments in promotion recommendations. We test our model in three studies: a field study of 587 employees and their 124 supervisors at a Fortune 500 telecom company, an experiment with two samples of working adults, and an experiment that directly varied explanatory mechanisms. Evidence revealed a moral double standard such that supervisors rendered less punitive judgment of the unethical acts of higher performing employees. In turn, supervisors’ bottom-line mentality (i.e., fixation on achieving results) influenced the degree to which they incorporated their punitive judgments into promotability considerations. By revealing the moral leniency afforded to higher performers and the uneven consequences meted out by supervisors, our results carry implications for behavioral ethics research and for organizations seeking to retain and promote their higher performers while also maintaining ethical standards that are applied fairly across employees.

Here is the opening:

Allegations of unethical conduct perpetrated by prominent, high-performing professionals have been exploding across newsfeeds (Zacharek et al., 2017). From customer service employees and their managers (e.g., Wells Fargo fake accounts; Levitt & Schoenberg, 2020), to actors, producers, and politicians (e.g., long-term corruption of Belarus’ president; Simmons, 2020), to reporters and journalists (e.g., the National Broadcasting Company’s alleged cover-up; Farrow, 2019), to engineers and executives (e.g., Volkswagen’s emissions fraud; Vlasic, 2017), the public has been repeatedly shocked by the egregious behaviors committed by individuals recognized as high performers within their respective fields (Bennett, 2017). 

In the wake of such widespread unethical, corrupt, and exploitative behavior, many have wondered how supervisors could have systematically ignored the conduct of high-performing individuals for so long while they ascended organizational ladders. How could such misconduct have resulted in their advancement to leadership roles rather than stalled or derailed the transgressors’ careers?

The story of Carlos Ghosn at Nissan hints at why and when individuals’ unethical behavior (i.e., lying, cheating, and stealing; Treviño et al., 2006, 2014) may result in less punitive judgment (i.e., the extent to which observed behavior is morally evaluated as negative, incorrect, or inappropriate). During his 30-year career in the automotive industry, Ghosn differentiated himself as a high performer known for effective cost-cutting, strategic planning, and spearheading change; however, in 2018, he fell from grace over allegations of years of financial malfeasance and embezzlement (Leggett, 2019). When allegations broke, Nissan’s CEO stood firm in his punitive judgment that Ghosn’s behavior “cannot be tolerated by the company” (Kageyama, 2018). Still, many questioned why the executives levied judgment on the misconduct that they had overlooked for years. Tokyo bureau chief of the New York Times, Motoko Rich, reasoned that Ghosn “probably would have continued to get away with it … if the company was continuing to be successful. But it was starting to slow down. There were signs that the magic had gone” (Barbaro, 2019). Similarly, an executive pointed squarely to the relevance of Ghosn’s performance, lamenting: “what [had he] done for us lately?” (Chozick & Rich, 2018). As a high performer, Ghosn’s unethical behavior evaded punitive judgment and career consequences from Nissan executives, but their motivation to leniently judge Ghosn’s behavior seemed to wane with his level of performance. In her reporting, Rich observed: “you can get away with whatever you want as long as you’re successful. And once you’re not so successful anymore, then all that rule-breaking and brashness doesn’t look so attractive and appealing anymore” (Barbaro, 2019).

Saturday, May 27, 2023

Costly Distractions: Focusing on Individual Behavior Undermines Support for Systemic Reforms

Hagmann, D., Liao, Y., Chater, N., & 
Loewenstein, G. (2023, April 22). 


Policy challenges can typically be addressed both through systemic changes (e.g., taxes and mandates) and by encouraging individual behavior change. In this paper, we propose that, while in principle complementary, systemic and individual perspectives can compete for the limited attention of people and policymakers. Thus, directing policies in one of these two ways can distract the public’s attention from the other—an “attentional opportunity cost.” In two pre-registered experiments (n = 1,800) covering three high-stakes domains (climate change, retirement savings, and public health), we show that when people learn about policies targeting individual behavior (such as awareness campaigns), they are more likely to themselves propose policies that target individual behavior, and to hold individuals rather than organizational actors responsible for solving the problem, than are people who learned about systemic policies (such as taxes and mandates, Study 1). This shift in attribution of responsibility has behavioral consequences: people exposed to individual interventions are more likely to donate to an organization that educates individuals rather than one seeking to effect systemic reforms (Study 2). Policies targeting individual behavior may, therefore, have the unintended consequence of redirecting attention and attributions of responsibility away from systemic change to individual behavior.


Major policy problems likely require a realignment of systemic incentives and regulations, as well as measures aimed at individual behavior change. In practice, systemic reforms have been difficult to implement, in part due to political polarization and in part because concentrated interest groups have lobbied against changes that threaten their profits. This has shifted the focus to individual behavior. The past two decades, in particular, have seen increasing popularity of ‘nudges’: interventions that can influence individual behavior without substantially changing economic incentives (Thaler & Sunstein, 2008). For example, people may be defaulted into green energy plans (Sunstein & Reisch, 2013) or 401(k) contributions (Madrian & Shea, 2001), and restaurants may vary whether they place calorie labels on the left or the right side of the menu (Dallas, Liu, & Ubel, 2019). These interventions have enjoyed tremendous popularity, because they can often be implemented even when opposition to systemic reforms is too large to change economic incentives. Moreover, it has been argued that nudges incur low economic costs, making them extremely cost effective even when the gains are small on an absolute scale (Tor & Klick, 2022).

In this paper, we document an important and so far unacknowledged cost of such interventions targeting individual behavior, first postulated by Chater and Loewenstein (2022). We show that when people learn about interventions that target individual behavior, they shift their attention away from systemic reforms compared to those who learn about systemic reforms. Across two experiments, we find that this subsequently affects their attitudes and behaviors. Specifically, they become less likely to propose systemic policy reforms, hold governments less responsible for solving the policy problem, and are less likely to support organizations that seek to promote systemic reform.

The findings of this study may not be news to corporate PR specialists. Indeed, as would be expected according to standard political economy considerations (e.g., Stigler, 1971), organizations act in a way that is consistent with a belief in this attentional opportunity cost account. Initiatives that have captured the public’s attention, including recycling campaigns and carbon footprint calculators, have been devised by the very organizations that stood to lose from further regulation that might have hurt their bottom line (e.g., bottle bills and carbon taxes, respectively), potentially distracting individual citizens, policymakers, and the wider public debate from systemic changes that are likely to be required to shift substantially away from the status quo.

Friday, May 26, 2023

A General Motivational Architecture for Human and Animal Personality

Del Giudice, M. (2022).
Neuroscience & Biobehavioral Reviews, 144, 104967.


To achieve integration in the study of personality, researchers need to model the motivational processes that give rise to stable individual differences in behavior, cognition, and emotion. The missing link in current approaches is a motivational architecture—a description of the core set of mechanisms that underlie motivation, plus a functional account of their operating logic and inter-relations. This paper presents the initial version of such an architecture, the General Architecture of Motivation (GAM). The GAM offers a common language for individual differences in humans and other animals, and a conceptual toolkit for building species-specific models of personality. The paper describes the main components of the GAM and their interplay, and examines the contribution of these components to the emergence of individual differences. The final section discusses how the GAM can be used to construct explicit functional models of personality, and presents a roadmap for future research.


To realize the dream of an integrated science of personality, researchers will have to move beyond structural descriptions and start building realistic functional models of individual differences. I believe that ground-up adaptationism guided by evolutionary theory is the way of the future (Lukaszewski, 2021); however, I also believe that the effort spent in teasing out the logic of specific mechanisms (e.g., the anger program; Lukaszewski et al., 2020; Sell et al., 2017) will not pay off in the domain of personality without the scaffolding of a broader theory of motivation—and an architectural framework to link the mechanisms together and explain their dynamic interplay.

In this paper, I have built on previous contributions to present the initial version of the GAM, a general motivational architecture that can be adapted to fit a broad range of animal species. The framework of the GAM should make it easier to integrate theoretical and empirical results from a variety of research areas, develop functional models of personality, and—not least—compare the personality of different species based on explicit functional principles (e.g., different sets of motivational systems, differences in activation/deactivation parameters), thus overcoming the limitations of standard factor-analytic descriptions. As I noted in the introduction, the GAM is intended as a work in progress, open to integrations and revisions. I hope this proposal will stimulate the curiosity of other scholars and spark the kind of creative, integrative work that can bring the science of personality to its well-deserved maturity.

Thursday, May 25, 2023

Unselfish traits and social decision-making patterns characterize six populations of real-world extraordinary altruists

Rhoads, S. A., Vekaria, K. M., et al. (2023).
Nature Communications
Published online 31 March 23


Acts of extraordinary, costly altruism, in which significant risks or costs are assumed to benefit strangers, have long represented a motivational puzzle. But the features that consistently distinguish individuals who engage in such acts have not been identified. We assess six groups of real-world extraordinary altruists who had performed costly or risky and normatively rare (<0.00005% per capita) altruistic acts: heroic rescues, non-directed and directed kidney donations, liver donations, marrow or hematopoietic stem cell donations, and humanitarian aid work. Here, we show that the features that best distinguish altruists from controls are traits and decision-making patterns indicating unusually high valuation of others’ outcomes: high Honesty-Humility, reduced Social Discounting, and reduced Personal Distress. Two independent samples of adults who were asked what traits would characterize altruists failed to predict this pattern. These findings suggest that theories regarding self-focused motivations for altruism (e.g., self-enhancing reciprocity, reputation enhancement) alone are insufficient explanations for acts of real-world self-sacrifice.

From the Discussion Section

That extraordinary altruists are consistently distinguished by a common set of traits linked to unselfishness is particularly noteworthy given the differences in the demographics of the various altruistic groups we sampled and the differences in the forms of altruism they have engaged in—from acts of physical heroism to the decision to donate bone marrow. This finding replicates and extends findings from a previous study demonstrating that extraordinary altruists show heightened subjective valuation of socially distant others. In addition, our results are consistent with a recent meta-analysis of 770 studies finding a strong and consistent relationship between Honesty-Humility and various forms of self-reported and laboratory-measured prosociality. Coupled with findings that low levels of unselfish traits (e.g., low Honesty-Humility, high social discounting) correspond to exploitative and antisocial behaviors such as cheating and aggression, these results also lend support to the notion of a bipolar caring continuum along which individuals vary in the degree to which they subjectively value (care about) the welfare of others. They further suggest altruism—arguably the willingness to be voluntarily “exploited” by others—to be the opposite of phenotypes like psychopathy that are characterized by exploiting others. These traits may best predict behavior in novel contexts lacking strong norms, particularly when decisions are made rapidly and intuitively. Notably, people who are higher in prosociality are more likely to participate in psychological research to begin with—thus the observed differences between altruists and controls may be underestimates (i.e., population-level differences may be larger).

Wednesday, May 24, 2023

Fighting for our cognitive liberty

Liz Mineo
The Harvard Gazette
Originally published 26 April 23

Imagine going to work and having your employer monitor your brainwaves to see whether you’re mentally tired or fully engaged in filling out that spreadsheet on April sales.

Nita Farahany, professor of law and philosophy at Duke Law School and author of “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” says it’s already happening, and we all should be worried about it.

Farahany highlighted the promise and risks of neurotechnology in a conversation with Francis X. Shen, an associate professor in the Harvard Medical School Center for Bioethics and the MGH Department of Psychiatry, and an affiliated professor at Harvard Law School. The Monday webinar was co-sponsored by the Harvard Medical School Center for Bioethics, the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and the Dana Foundation.

Farahany said the practice of tracking workers’ brains, once exclusively the stuff of science fiction, follows the natural evolution of personal technology, which has normalized the use of wearable devices that chronicle heartbeats, footsteps, and body temperatures. Sensors capable of detecting and decoding brain activity already have been embedded into everyday devices such as earbuds, headphones, watches, and wearable tattoos.

“Commodification of brain data has already begun,” she said. “Brain sensors are already being sold worldwide. It isn’t everybody who’s using them yet. When it becomes an everyday part of our everyday lives, that’s the moment at which you hope that the safeguards are already in place. That’s why I think now is the right moment to do so.”

Safeguards to protect people’s freedom of thought, privacy, and self-determination should be implemented now, said Farahany. Five thousand companies around the world are using SmartCap technologies to track workers’ fatigue levels, and many other companies are using other technologies to track focus, engagement and boredom in the workplace.

If protections are put in place, said Farahany, the story with neurotechnology could be different than the one Shoshana Zuboff warns of in her 2019 book, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.” In it Zuboff, Charles Edward Wilson Professor Emerita at Harvard Business School, examines the threat of the widescale corporate commodification of personal data in which predictions of our consumer activities are bought, sold, and used to modify behavior.

Tuesday, May 23, 2023

Machine learning uncovers the most robust self-report predictors of relationship quality across 43 longitudinal couples studies

Joel, S., Eastwick, P. W., et al. (2020).
Proceedings of the National Academy of Sciences,
117(32), 19061–19071.


Given the powerful implications of relationship quality for health and well-being, a central mission of relationship science is explaining why some romantic relationships thrive more than others. This large-scale project used machine learning (i.e., Random Forests) to 1) quantify the extent to which relationship quality is predictable and 2) identify which constructs reliably predict relationship quality. Across 43 dyadic longitudinal datasets from 29 laboratories, the top relationship-specific predictors of relationship quality were perceived-partner commitment, appreciation, sexual satisfaction, perceived-partner satisfaction, and conflict. The top individual-difference predictors were life satisfaction, negative affect, depression, attachment avoidance, and attachment anxiety. Overall, relationship-specific variables predicted up to 45% of variance at baseline, and up to 18% of variance at the end of each study. Individual differences also performed well (21% and 12%, respectively). Actor-reported variables (i.e., own relationship-specific and individual-difference variables) predicted two to four times more variance than partner-reported variables (i.e., the partner’s ratings on those variables). Importantly, individual differences and partner reports had no predictive effects beyond actor-reported relationship-specific variables alone. These findings imply that the sum of all individual differences and partner experiences exert their influence on relationship quality via a person’s own relationship-specific experiences, and effects due to moderation by individual differences and moderation by partner-reports may be quite small. Finally, relationship-quality change (i.e., increases or decreases in relationship quality over the course of a study) was largely unpredictable from any combination of self-report variables. This collective effort should guide future models of relationships.


What predicts how happy people are with their romantic relationships? Relationship science—an interdisciplinary field spanning psychology, sociology, economics, family studies, and communication—has identified hundreds of variables that purportedly shape romantic relationship quality. The current project used machine learning to directly quantify and compare the predictive power of many such variables among 11,196 romantic couples. People’s own judgments about the relationship itself—such as how satisfied and committed they perceived their partners to be, and how appreciative they felt toward their partners—explained approximately 45% of their current satisfaction. The partner’s judgments did not add information, nor did either person’s personalities or traits. Furthermore, none of these variables could predict whose relationship quality would increase versus decrease over time.
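The "percent of variance" figures in the summary above are R² values from the Random Forest models. As a minimal illustration of the metric itself (with invented ratings, not the study's data), variance explained can be computed as:

```python
import statistics

def r_squared(actual, predicted):
    """Proportion of variance in `actual` explained by `predicted`,
    the metric behind 'explained approximately 45% of variance'."""
    mean = statistics.fmean(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# Synthetic relationship-quality scores and a model's predictions.
actual    = [4.2, 5.8, 3.1, 6.4, 2.9, 5.0, 4.7, 3.8]
predicted = [4.5, 5.2, 3.6, 6.0, 3.4, 4.6, 4.9, 4.1]

print(f"variance explained: {r_squared(actual, predicted):.0%}")
```

In the study itself the predictions come from Random Forests trained on the self-report variables; the R² of those out-of-sample predictions is what tops out near 45% for relationship-specific predictors at baseline.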


From a public interest standpoint, this study provides provisional answers to the perennial question “What predicts how satisfied and committed I will be with my relationship partner?” Experiencing negative affect, depression, or insecure attachment are surely relationship risk factors. But if people nevertheless manage to establish a relationship characterized by appreciation, sexual satisfaction, and a lack of conflict—and they perceive their partner to be committed and responsive—those individual risk factors may matter little. That is, relationship quality is predictable from a variety of constructs, but some matter more than others, and the most proximal predictors are features that characterize a person’s perception of the relationship itself.

Monday, May 22, 2023

New evaluation guidelines for dementia

The Monitor on Psychology
Vol. 54, No. 3
Print Version: Page 40

Updated APA guidelines are now available to help psychologists evaluate patients with dementia and their caregivers with accuracy and sensitivity and learn about the latest developments in dementia science and practice.

APA Guidelines for the Evaluation of Dementia and Age-Related Cognitive Change (PDF, 992KB) was released in 2021 and reflects updates in the field since the last set of guidelines, released in 2011, said geropsychologist and University of Louisville professor Benjamin T. Mast, PhD, ABPP, who chaired the task force that produced the guidelines.

“These guidelines aspire to help psychologists gain not only a high level of technical expertise in understanding the latest science and procedures for evaluating dementia,” he said, “but also have a high level of sensitivity and empathy for those undergoing a life change that can be quite challenging.”

Major updates since 2011 include:

Discussion of new DSM terminology. The new guidelines discuss changes in dementia diagnosis and diagnostic criteria reflected in the most recent version of the Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition). In particular, the DSM-5 changed the term “dementia” to “major neurocognitive disorder,” and “mild cognitive impairment” to “mild neurocognitive disorder.” As was true with earlier nomenclature, providers and others amend these terms depending on the cause or causes of the disorder, for example, “major neurocognitive disorder due to traumatic brain injury.” That said, the terms “dementia” and “mild cognitive impairment” are still widely used in medicine and mental health care.

Discussion of new research guidelines. The new guidelines also discuss research advances in the field, in particular the use of biomarkers to detect various forms of dementia. Examples are the use of amyloid imaging—PET scans with a radio tracer that selectively binds to amyloid plaques—and analysis of amyloid and tau in cerebrospinal fluid. While these techniques are still mainly used in major academic medical centers, it is important for clinicians to know about them because they may eventually be used in clinical practice, said Bonnie Sachs, PhD, ABPP, an associate professor and neuropsychologist at Wake Forest University School of Medicine. “These developments change the way we think about things like Alzheimer’s disease, because they show there is a long preclinical asymptomatic phase before people start to show memory problems,” she said.

Sunday, May 21, 2023

Artificial intelligence, superefficiency and the end of work: a humanistic perspective on meaning in life

Knell, S., Rüther, M.
AI Ethics (2023).


How would it be assessed from an ethical point of view if human wage work were replaced by artificially intelligent systems (AI) in the course of an automation process? An answer to this question has been discussed above all under the aspects of individual well-being and social justice. Although these perspectives are important, in this article, we approach the question from a different perspective: that of leading a meaningful life, as understood in analytical ethics on the basis of the so-called meaning-in-life debate. Our thesis here is that a life without wage work loses specific sources of meaning, but can still be sufficiently meaningful in certain other ways. Our starting point is John Danaher’s claim that ubiquitous automation inevitably leads to an achievement gap. Although we share this diagnosis, we reject his provocative solution according to which game-like virtual realities could be an adequate substitute source of meaning. Subsequently, we outline our own systematic alternative which we regard as a decidedly humanistic perspective. It focuses both on different kinds of social work and on rather passive forms of being related to meaningful contents. Finally, we go into the limits and unresolved points of our argumentation as part of an outlook, but we also try to defend its fundamental persuasiveness against a potential objection.

From Concluding remarks

In this article, we explored the question of how we can find meaning in a post-work world. Our answer relies on a critique of John Danaher’s utopia of games and tries to stick to the humanistic idea, namely to the idea that we do not have to alter our human lifeform in an extensive way and also can keep up our orientation towards common ideals, such as working towards the good, the true and the beautiful.

Our proposal still has some shortcomings, which include the following three that we cannot deal with extensively but at least want to briefly comment on. First, we assumed that certain professional fields, especially in the meaning conferring area of the good, cannot be automated, so that the possibility of mini-jobs in these areas can be considered. This assumption is based on a substantial thesis from the philosophy of mind, namely that AI systems cannot develop consciousness and consequently also no genuine empathy. This assumption needs to be further elaborated, especially in view of some forecasts that even the altruistic and philanthropic professions are not immune to the automation of superefficient systems. Second, we have adopted without further critical discussion the premise of the hybrid standard model of a meaningful life according to which meaning conferring objective value is to be found in the three spheres of the true, the good, and the beautiful. We take this premise to be intuitively appealing, but a further elaboration of our argumentation would have to try to figure out whether this triad is really exhaustive, and if so, due to which underlying more general principle. Third, the receptive side of finding meaning in the realm of the true and beautiful was emphasized and opposed to the active striving towards meaningful aims. Here, we have to more precisely clarify what axiological status reception has in contrast to active production—whether it is possibly meaning conferring to a comparable extent or whether it is actually just a less meaningful form. This is particularly important to be able to better assess the appeal of our proposal, which depends heavily on the attractiveness of the vita contemplativa.

Saturday, May 20, 2023

ChatGPT Answers Beat Physicians' on Info, Patient Empathy, Study Finds

Michael DePeau-Wilson
MedPage Today
Originally published 28 April 23

The artificial intelligence (AI) chatbot ChatGPT outperformed physicians when answering patient questions, based on quality of response and empathy, according to a cross-sectional study.

Of 195 exchanges, evaluators preferred ChatGPT responses to physician responses in 78.6% (95% CI 75.0-81.8) of the 585 evaluations, reported John Ayers, PhD, MA, of the Qualcomm Institute at the University of California San Diego in La Jolla, and co-authors.

The AI chatbot responses were given a significantly higher quality rating than physician responses (t=13.3, P<0.001), with the proportion of responses rated as good or very good quality (≥4) higher for ChatGPT (78.5%) than physicians (22.1%), amounting to a 3.6 times higher prevalence of good or very good quality responses for the chatbot, they noted in JAMA Internal Medicine.

Furthermore, ChatGPT's responses were rated as being significantly more empathetic than physician responses (t=18.9, P<0.001), with the proportion of responses rated as empathetic or very empathetic (≥4) higher for ChatGPT (45.1%) than for physicians (4.6%), amounting to a 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.
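The "times higher prevalence" figures above follow directly from the rated proportions reported in the study. A minimal sketch of the arithmetic (proportions taken from the article; variable and function names are ours):

```python
# Proportion of responses rated >=4 on each scale, in percent,
# as reported in the JAMA Internal Medicine study.
quality = {"chatgpt": 78.5, "physicians": 22.1}
empathy = {"chatgpt": 45.1, "physicians": 4.6}

def prevalence_ratio(proportions):
    """Ratio of the chatbot's prevalence to the physicians' prevalence."""
    return round(proportions["chatgpt"] / proportions["physicians"], 1)

print(prevalence_ratio(quality))  # 3.6 (good or very good quality)
print(prevalence_ratio(empathy))  # 9.8 (empathetic or very empathetic)
```

Note that these are ratios of proportions (prevalence ratios), not odds ratios, which would be considerably larger given proportions this far apart.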

"ChatGPT provides a better answer," Ayers told MedPage Today. "I think of our study as a phase zero study, and it clearly shows that ChatGPT wins in a landslide compared to physicians, and I wouldn't say we expected that at all."

He said they were trying to figure out how ChatGPT, developed by OpenAI, could potentially help resolve the burden of answering patient messages for physicians, which he noted is a well-documented contributor to burnout.

Ayers said that he approached this study with his focus on another population as well, pointing out that the burnout crisis might be affecting roughly 1.1 million providers across the U.S., but it is also affecting about 329 million patients who are engaging with overburdened healthcare professionals.


"Physicians will need to learn how to integrate these tools into clinical practice, defining clear boundaries between full, supervised, and proscribed autonomy," he added. "And yet, I am cautiously optimistic about a future of improved healthcare system efficiency, better patient outcomes, and reduced burnout."

After seeing the results of this study, Ayers thinks that the research community should be working on randomized controlled trials to study the effects of AI messaging, so that the future development of AI models will be able to account for patient outcomes.

Friday, May 19, 2023

What’s wrong with virtue signaling?

Hill, J., Fanciullo, J. 
Synthese 201, 117 (2023).


A novel account of virtue signaling and what makes it bad has recently been offered by Justin Tosi and Brandon Warmke. Despite plausibly vindicating the folk’s conception of virtue signaling as a bad thing, their account has recently been attacked by both Neil Levy and Evan Westra. According to Levy and Westra, virtue signaling actually supports the aims and progress of public moral discourse. In this paper, we rebut these recent defenses of virtue signaling. We suggest that virtue signaling only supports the aims of public moral discourse to the extent it is an instance of a more general phenomenon that we call norm signaling. We then argue that, if anything, virtue signaling will undermine the quality of public moral discourse by undermining the evidence we typically rely on from the testimony and norm signaling of others. Thus, we conclude, not only is virtue signaling not needed, but its epistemological effects warrant its bad reputation.


In this paper, we have challenged two recent defenses of virtue signaling. Whereas Levy ascribes a number of good features to virtue signaling—its providing higher-order evidence for the truth of certain moral judgments, its helping us delineate groups of reliable moral cooperators, and its not involving any hypocrisy on the part of its subject—it seems these good features are ascribable to virtue signaling ultimately and only because they are good features of norm signaling, and virtue signaling entails norm signaling. Similarly, whereas Westra suggests that virtue signaling uniquely benefits public moral discourse by supporting moral progress in a way that mere norm signaling does not, it seems virtue signaling also uniquely harms public moral discourse by supporting moral regression in a way that mere norm signaling does not. It therefore seems that in each case, to the extent it differs from norm signaling, virtue signaling simply isn’t needed.

Moreover, we have suggested that, if anything, virtue signaling will undermine the higher order evidence we typically can and should rely on from the testimony of others. Virtue signaling essentially involves a motivation that aims at affecting public moral discourse but that does not aim at the truth. When virtue signaling is rampant—when we are aware that this ulterior motive is common among our peers—we should give less weight to the higher-order evidence provided by the testimony of others than we otherwise would, on pain of double counting evidence and falling for unwarranted confidence. We conclude, therefore, that not only is virtue signaling not needed, but its epistemological effects warrant its bad reputation. 

Thursday, May 18, 2023

People Construe a Corporation as an Individual to Ascribe Responsibility in Cases of Corporate Wrongdoing

Sharma, N., Flores-Robles, G., & Gantman, A. P.
(2023, April 11). PsyArXiv


In cases of corporate wrongdoing, it is difficult to assign blame across multiple agents who played different roles. We propose that people have dualist ideas of corporate hierarchies: with the boss as “the mind,” and the employee as “the body,” and the employee appears to carry out the will of the boss like the mind appears to will the body (Wegner, 2003). Consistent with this idea, three experiments showed that moral responsibility was significantly higher for the boss, unless the employee acted prior to, inconsistently with, or outside of the boss’s will. People even judge the actions of the employee as mechanistic (“like a billiard ball”) when their actions mirror the will of the boss. This suggests that the same features that tell us our minds cause our actions, also facilitate the sense that a boss has willed the behavior of an employee and is ultimately responsible for bad outcomes in the workplace.

From the General Discussion

Practical Implications

Our findings offer a number of practical implications for organizations. First, our research provides insight into how people currently make judgments of moral responsibility within an organization (and specifically, when a boss gives instructions to an employee). Second, our research provides insight into the decision-making process of whether to fire a boss-figure like a CEO (or other decision-maker) or invest in lasting change in organizational culture following an organizational wrongdoing. From a scapegoating perspective, replacing a CEO is not intended to produce lasting change in underlying organizational problems and signals a desire to maintain the status quo (Boeker, 1992; Shen & Cannella, 2002). Scapegoating may not always be in the best interest of investors. Previous research has shown that following financial misrepresentation, investors react positively only to CEO successions wherein the replacement comes from the outside, which serves as a costly signal of the firm’s understanding of the need for change (Gangloff et al., 2016). And so, by allocating responsibility to the CEO without creating meaningful change, organizations may lose investors. Finally, this research has implications for building public trust in organizations. Following the Wells Fargo scandal, two-thirds of Wells Fargo customers (65%) claimed they trusted their bank less, and about half of Wells Fargo customers (51%) were willing to switch to another bank, if they perceived them to be more trustworthy (Business Wire, 2017). Thus, how organizations deal with wrongdoing (e.g., whether they fire individuals, create lasting change, or both) can influence public trust. If corporations want to build trust among the general public, and in doing so, create a larger customer base, they can look at how people understand and ascribe responsibility and consequently punish organizational wrongdoings.

Wednesday, May 17, 2023

In Search of an Ethical Constraint on Hospital Revenue

Lauren Taylor
The Hastings Center
Originally published 14 APR 23

Here are two excerpts:

A physician whistleblower came forward alleging that Detroit Medical Center, owned by for-profit Tenet Healthcare, refused to halt elective procedures in early days of the pandemic, even after dozens of patients and staff were exposed to a COVID-positive patient undergoing an organ transplant. According to the physician, Tenet persisted on account of the margin it stood to generate. “Continuing to do this [was] truly a crime against patients,” recalled Dr. Shakir Hussein, who was fired shortly thereafter.

Earlier in 2022, nonprofit Bon Secours health system was investigated for its strategic downsizing of a community hospital in Richmond, Va., which left a predominantly Black community lacking access to standard medical services such as MRIs and maternity care. Still, the hospital managed to turn a $100 million margin, which buoyed the system’s $1 billion net revenue in 2021. “Bon Secours was basically laundering money through this poor hospital to its wealthy outposts,” said one emergency department physician who had worked at Richmond Community Hospital. “It was all about profits.”  

The academic literature further substantiates concerns about hospital margin maximization. One paper examining the use of municipal, tax-exempt debt among nonprofit hospitals found evidence of arbitrage behavior, where hospitals issued debt not to invest in new capital (the stated purpose of most municipal debt issuances) but to invest the proceeds of the issuance in securities and other endowment accounts. A more recent paper, focused on private equity-owned hospitals, found that facilities acquired by private equity were more likely to “add specific, profitable hospital-based services and less likely to add or continue those with unreliable revenue streams.” These and other findings led Donald Berwick to write that greed poses an existential threat to U.S. health care.

None of the hospital actions described above are necessarily illegal but they certainly bring long-lurking issues within bioethics to the fore. Recognizing that hospitals are resource-dependent organizations, what normative, ethical responsibilities–or constraints–do they face with regard to revenue-generation? A review of the health services and bioethics literature to date turns up three general answers to this question, all of which are unsatisfactory.


In sum, we cannot rely on laws alone to provide an effective check on hospital revenue generation due to the law’s inevitably limited scope. We therefore must identify an internalized ethic to guide hospital revenue generation. The concept of an organizational mission is a weak check on nonprofit hospitals and virtually meaningless among for-profit hospitals, and reliance on professionalism is incongruous with the empirical data about who has final decision-making authority over hospitals today. We need a new way to conceptualize hospital responsibilities.

Two critiques of this idea merit confrontation. The first is that there is no urgent need for an internalized constraint on revenue generation because more than half of hospitals are currently operating in the red; seeking to curb their revenue further is counterproductive. But just because a proportion of this sector is in the red does not undercut the egregiousness of the hospital actions described earlier. Moreover, if hospitals are running a deficit in part because they choose not to undertake unethical action to generate revenue, then any rule developed saying they can’t undertake ethical actions to generate revenue won’t apply to them. The second critique is that the current revenues that hospitals generate are legitimate because they bolster institutional “rainy day funds” of sorts, which can be deployed to help people and communities in need at a future date. But with a declining national life expectancy, a Black maternal mortality rate hovering at roughly that of Tajikistan, and medical debt the leading cause of personal bankruptcy in the U.S. – it is already raining. Increasing reserves, by any means, can no longer be defended with this logic.

Tuesday, May 16, 2023

Approaches to Muslim Biomedical Ethics: A Classification and Critique

Dabbagh, H., Mirdamadi, S.Y. & Ajani, R.R.
Bioethical Inquiry (2023).


This paper provides a perspective on where contemporary Muslim responses to biomedical-ethical issues stand to date. There are several ways in which Muslim responses to biomedical ethics can and have been studied in academia. The responses are commonly divided along denominational lines or under the schools of jurisprudence. All such efforts classify the responses along the lines of communities of interpretation rather than the methods of interpretation. This research is interested in the latter. Thus, our criterion for classification is the underlying methodology behind the responses. The proposed classification divides Muslim biomedical-ethical reasoning into three methodological categories: 1) textual, 2) contextual, and 3) para-textual.


There is widespread recognition among Muslim scholars dealing with biomedical ethical issues that context plays an essential role in forming ethical principles and judgements. The context-sensitive approaches in Muslim biomedical ethics respond to the requirements of modern biomedical issues by recognizing the contexts in which scriptural text has been formed and developed through the course of Muslim intellectual history. This paves the way for bringing in different context-sensitive interpretations of the sacred texts through different reasoning tools and methods, whether they are rooted in the uṣūl al-fiqh tradition for the contextualists, or in moral philosophy for the para-textualists. For the textualists, reasoning outside of the textual boundaries is not acceptable. While contextualists tend to believe that contextual considerations make sense only in light of Sharīʿa law and should not be understood independently of Sharīʿa law, para-textualists believe that moral perceptions and contextual considerations are valid irrespective of Sharīʿa law, insofar as they do not neglect the moral vision of the scriptures. The common ground between the majority of the textualists and the contextualists lies in giving primacy to the Sharīʿa law. Moral requirements for both the textualists and the contextualists are only determined by Sharīʿa commandments, and Sharīʿa commandments are the only basis on which to decide what is morally permissible or impermissible in biomedical ethical issues. This is an Ashʿarī-inspired approach to biomedical ethics with respect to human moral reasoning (Sachedina 2005; Aramesh 2020; Reinhart 2004; Moosa 2004; Moosapour et al. 2018).

Para-textualists, on the other hand, do not deny the relevance of Sharīʿa, but treat the reasoning embedded in Sharīʿa as being on a par with moral reasoning in general. Thus, if there are contending strands of moral reasoning on a particular biomedical ethical issue, Sharīʿa-based reasoning will need to compete with other moral reasoning on the issue. If the aḥkām (religious judgements) are deemed to be reasonably sound, then for para-textualists there are no grounds for not accepting them. Although using and referring to Sharīʿa might work in many cases, it is not the case that Sharīʿa is enough in every case to judge on moral issues. For instance, morally speaking, it is not enough to refer to Sharīʿa when someone is choosing or refusing euthanasia or abortion. For para-textualists what matters most is how Sharīʿa morally reasons about the permissibility or impermissibility of an action. If it is morally justified to euthanize or abort, we are rationally (and morally) bound to accept it, and if it is not morally justified, we will then either have to leave our judgement about choosing or refusing euthanasia or abortion or find another context-sensitive interpretation to rationalize the relevant commandment derived from Sharīʿa. Thus, the departure point for the para-textualist approach is moral reasoning, whether it is found in moral philosophy, Muslim jurisprudence, or elsewhere (Soroush 2009; Shahrur 1990, 2009; Hallaq 1997; An-Na’im 2008). Para-textualist methodology tries to remain open to the possibility of morally criticizing religious judgements (aḥkām), while remaining true to the moral vision of the scriptures. This is a Muʿtazilī-inspired approach to biomedical ethics (Hourani 1976; Vasalou 2008; Sheikh 2019; Farahat 2019; Reinhart 1995; Al-Bar and Chamsi-Pasha 2015; Hallaq 2014).

Monday, May 15, 2023

The Folk Concept of the Good Life: Neither Happiness nor Well-Being

Kneer, M., & Haybron, D. M. (2023).


The concept of a good life is usually assumed by philosophers to be equivalent to that of well-being, or perhaps of a morally good life, and hence has received little attention as a potentially distinct subject matter.  In a series of experiments participants were presented with vignettes involving socially sanctioned wrongdoing toward outgroup members.  Findings indicated that, for a large majority, judgments of bad character strongly reduce ascriptions of the good life, while having no impact at all on ascriptions of happiness or well-being. Taken together with earlier findings these results suggest that the lay concept of a good life is clearly distinct from those of happiness, well-being, or morality, likely encompassing both morality and well-being, and perhaps other values as well: whatever matters in a person’s life. Importantly, morality appears not to play a fundamental role in either happiness or well-being among the folk. 

General Discussion

Our studies yielded two main results of note. First, a person’s moral qualities appear to have no direct bearing on ordinary assessments of happiness and well-being among the great majority of individuals. This finding is consistent with an earlier study involving similar vignettes focusing just on happiness ascriptions (Kneer and Haybron 2023). These studies suggest that, among the folk, the ancient and much-debated idea that happiness or well-being requires moral virtue holds little currency: a bad person can perfectly well be happy and do just fine.

This of course does not settle the philosophical debate, as the folk may be wrong, or further studies may reveal that these results do not generalize, or apply only among American English-speaking populations. But it does suggest that philosophers following Plato in claiming that serious immorality precludes flourishing are defending a less-than-intuitive position, despite the widespread use of intuition pumps in this literature.

Why might many philosophers’ intuitions, and earlier research on the influence of morality on happiness ascriptions, have pointed to a different verdict? As the current paper focuses primarily on a different question, the concept of a good life, we refer the reader to (Kneer and Haybron 2023) for more extensive discussion of the differences between our findings and those of Phillips et al.

But one possibility is that the claims in question rest on the intuitions of a small but significant minority—roughly a quarter—whose judgments of happiness and well-being showed some impact of morality. But even among this group our studies here and in previous work found a modest impact of morality compared to the very strong philosophical claims at issue: not just that morality exacts some toll on the wrongdoer, but that such a person cannot do well at all. Indeed, establishing the latter claim is essentially the point of
Plato’s Republic.

Sunday, May 14, 2023

Consciousness begins with feeling, not thinking

A. Damasio & H. Damasio
Originally posted 20 APR 23

Please pause for a moment and notice what you are feeling now. Perhaps you notice a growing snarl of hunger in your stomach or a hum of stress in your chest. Perhaps you have a feeling of ease and expansiveness, or the tingling anticipation of a pleasure soon to come. Or perhaps you simply have a sense that you exist. Hunger and thirst, pain, pleasure and distress, along with the unadorned but relentless feelings of existence, are all examples of ‘homeostatic feelings’. Homeostatic feelings are, we argue here, the source of consciousness.

In effect, feelings are the mental translation of processes occurring in your body as it strives to balance its many systems, achieve homeostasis, and keep you alive. In a conventional sense feelings are part of the mind and yet they offer something extra to the mental processes. Feelings carry spontaneously conscious knowledge concerning the current state of the organism as a result of which you can act to save your life, such as when you respond to pain or thirst appropriately. The continued presence of feelings provides a continued perspective over the ongoing body processes; the presence of feelings lets the mind experience the life process along with other contents present in your mind, namely, the relentless perceptions that collect knowledge about the world along with reasonings, calculations, moral judgments, and the translation of all these contents in language form. By providing the mind with a ‘felt point of view’, feelings generate an ‘experiencer’, usually known as a self. The great mystery of consciousness in fact is the mystery behind the biological construction of this experiencer-self.

In sum, we propose that consciousness is the result of the continued presence of homeostatic feelings. We continuously experience feelings of one kind or another, and feelings naturally tell each of us, automatically, not only that we exist but that we exist in a physical body, vulnerable to discomfort yet open to countless pleasures as well. Feelings such as pain or pleasure provide you with consciousness, directly; they provide transparent knowledge about you. They tell you, in no uncertain terms, that you exist and where you exist, and point to what you need to do to continue existing – for example, treating pain or taking advantage of the well-being that came your way. Feelings illuminate all the other contents of mind with the light of consciousness, both the plain events and the sublime ideas. Thanks to feelings, consciousness fuses the body and mind processes and gives our selves a home inside that partnership.

That consciousness should come ‘down’ to feelings may surprise those who have been led to associate consciousness with the lofty top of the physiological heap. Feelings have been considered inferior to reason for so long that the idea that they are not only the noble beginning of sentient life but an important governor of life’s proceedings may be difficult to accept. Still, feelings and the consciousness they beget are largely about the simple but essential beginnings of sentient life, a life that is not merely lived but knows that it is being lived.

Saturday, May 13, 2023

Doctors are drowning in paperwork. Some companies claim AI can help

Geoff Brumfiel
NPR.org - Health Shots
Originally posted 5 APR 23

Here are two excerpts:

But Paul kept getting pinged from younger doctors and medical students. They were using ChatGPT, and saying it was pretty good at answering clinical questions. Then the users of his software started asking about it.

In general, doctors should not be using ChatGPT by itself to practice medicine, warns Marc Succi, a doctor at Massachusetts General Hospital who has conducted evaluations of how the chatbot performs at diagnosing patients. When presented with hypothetical cases, he says, ChatGPT could produce a correct diagnosis accurately at close to the level of a third- or fourth-year medical student. Still, he adds, the program can also hallucinate findings and fabricate sources.

"I would express considerable caution using this in a clinical scenario for any reason, at the current stage," he says.

But Paul believed the underlying technology can be turned into a powerful engine for medicine. Paul and his colleagues have created a program called "Glass AI" based off of ChatGPT. A doctor tells the Glass AI chatbot about a patient, and it can suggest a list of possible diagnoses and a treatment plan. Rather than working from the raw ChatGPT information base, the Glass AI system uses a virtual medical textbook written by humans as its main source of facts – something Paul says makes the system safer and more reliable.


Nabla, which he co-founded, is now testing a system that can, in real time, listen to a conversation between a doctor and a patient and provide a summary of what the two said to one another. Doctors inform their patients that the system is being used in advance, and as a privacy measure, it doesn't actually record the conversation.

"It shows a report, and then the doctor will validate with one click, and 99% of the time it's right and it works," he says.

The summary can be uploaded to a hospital records system, saving the doctor valuable time.

Other companies are pursuing a similar approach. In late March, Nuance Communications, a subsidiary of Microsoft, announced that it would be rolling out its own AI service designed to streamline note-taking using the latest version of ChatGPT, GPT-4. The company says it will showcase its software later this month.

Friday, May 12, 2023

‘Mind-reading’ AI: Japan study sparks ethical debate

David McElhinney
Originally posted 7 APR 23

Yu Takagi could not believe his eyes. Sitting alone at his desk on a Saturday afternoon in September, he watched in awe as artificial intelligence decoded a subject’s brain activity to create images of what he was seeing on a screen.

“I still remember when I saw the first [AI-generated] images,” Takagi, a 34-year-old neuroscientist and assistant professor at Osaka University, told Al Jazeera.

“I went into the bathroom and looked at myself in the mirror and saw my face, and thought, ‘Okay, that’s normal. Maybe I’m not going crazy'”.

Takagi and his team used Stable Diffusion (SD), a deep learning AI model developed in Germany in 2022, to analyse the brain scans of test subjects shown up to 10,000 images while inside an MRI machine.

After Takagi and his research partner Shinji Nishimoto built a simple model to “translate” brain activity into a readable format, Stable Diffusion was able to generate high-fidelity images that bore an uncanny resemblance to the originals.

The AI could do this despite not being shown the pictures in advance or trained in any way to manufacture the results.

“We really didn’t expect this kind of result,” Takagi said.

Takagi stressed that the breakthrough does not, at this point, represent mind-reading – the AI can only produce images a person has viewed.

“This is not mind-reading,” Takagi said. “Unfortunately there are many misunderstandings with our research.”

“We can’t decode imaginations or dreams; we think this is too optimistic. But, of course, there is potential in the future.”

Note: If AI systems can decode human thoughts, it could infringe upon people's privacy and autonomy. There are concerns that this technology could be used for invasive surveillance or to manipulate people's thoughts and behavior. Additionally, there are concerns about how this technology could be used in legal proceedings and whether it violates human rights.

Thursday, May 11, 2023

Reputational Rationality Theory

Dorison, C. (2023, March 29). 


Traditionally, research on human judgment and decision making draws on cognitive psychology to identify deviations from normative standards of how decisions ought to be made. These deviations are commonly considered irrational errors and biases. However, this approach has serious limitations. Critically, even though most decisions are embedded within complex social networks of observers, this approach typically ignores how decisions are perceived by valued audiences. To address this limitation, this article proposes reputational rationality theory: a theoretical model of how observers evaluate targets who do (vs. do not) strictly adhere to normative standards of judgment and choice. Drawing on the dual pathways of homophily and social signaling, the theory generates testable predictions regarding when and why observers positively evaluate error-prone decision makers, termed the benefit of bias hypothesis. Given that individuals hold deep impression management goals, reputational rationality theory challenges the unqualified classification of response tendencies that deviate from normative standards as irrational. That is, apparent errors and biases can, under certain conditions, be reputationally rational. The reputational rewards associated with cognitive biases may in turn contribute to their persistence. Acknowledging the (sometimes beneficial) reputational consequences of cognitive biases can address long-standing puzzles in judgment and decision making as well as generate fruitful avenues for future research.


Reputational rationality theory inverts this relationship: it is primarily concerned with the observer rather than the target, and thus yields novel predictions regarding how observers evaluate targets (rather than how targets shift behavior due to pressures from observers). Reputational rationality theory is inherently a social cognition model, concerned with—for example—how the public evaluates the politician or how the CEO evaluates the employee. The theory suggests that several influential errors and biases—not taking the value-maximizing risk or not investing in the worthwhile venture—can serve functional goals once reputational consequences are considered.

As summarized above, prior cognitive and social approaches to judgment and decision making have traditionally omitted empirical investigation of how judgments and decisions are perceived by valued audiences—such as the public or coworkers in the examples above. How concerning is this omission? On the one hand, this omission may be tolerable—if not ignorable—if reputational incentives align with goals that are traditionally considered in this work (e.g., accuracy, optimization, adherence to logic and statistics). Simply put, researchers could safely ignore reputational consequences if such consequences already reinforce conventional wisdom and standard recommendations for what it means to make a “good” decision. If observers penalize targets who recklessly display overconfidence or who flippantly switch their risk preferences based on decision frames, then examining these reputational consequences becomes less necessary, and the omission thus less severe. On the other hand, this omission may be relatively more severe if reputational incentives regularly conflict with traditional measures or undermine standard recommendations.



The challenges currently facing society are daunting. The planet is heating at an alarming pace. A growing number of countries hold nuclear weapons capable of killing millions in mere minutes. Democratic institutions in many countries, including the United States, appear weaker than previously thought. Confronting such challenges requires global leaders and citizens alike to make sound judgments and decisions within complex environments: to effectively navigate risk under conditions of widespread uncertainty; to pivot from failing paths to new opportunities; to properly calibrate their confidence among multiple possible futures. But is human rationality up to the task?

Building on traditional cognitive and social approaches to human judgment and decision making, reputational rationality theory casts doubt on traditional normative classifications of errors and biases based on individual-level cognition, while simultaneously generating testable predictions for future research taking a broader social/institutional perspective. By examining both the reputational causes and consequences of human judgment and decision making, researchers can gain increased understanding not only into how judgments and decisions are made, but also how behavior can be changed—for good.

Wednesday, May 10, 2023

Foundation Models are exciting, but they should not disrupt the foundations of caring

Morley, Jessica and Floridi, Luciano
(April 20, 2023).


The arrival of Foundation Models in general, and Large Language Models (LLMs) in particular, capable of ‘passing’ medical qualification exams at or above a human level, has sparked a new wave of ‘the chatbot will see you now’ hype. It is exciting to witness such impressive technological progress, and LLMs have the potential to benefit healthcare systems, providers, and patients. However, these benefits are unlikely to be realised by propagating the myth that, just because LLMs are sometimes capable of passing medical exams, they will ever be capable of supplanting any of the main diagnostic, prognostic, or treatment tasks of a human clinician. Contrary to popular discourse, LLMs are not necessarily more efficient, objective, or accurate than human healthcare providers. They are vulnerable to errors in underlying ‘training’ data and prone to ‘hallucinating’ false information rather than facts. Moreover, there are nuanced, qualitative, or less measurable reasons why it is prudent to be mindful of hyperbolic claims regarding the transformative power of LLMs. Here we discuss these reasons, including contextualisation, empowerment, learned intermediaries, manipulation, and empathy. We conclude that overstating the current potential of LLMs does a disservice to the complexity of healthcare and the skills of healthcare practitioners and risks a ‘costly’ new AI winter. A balanced discussion recognising the potential benefits and limitations can help avoid this outcome.


The technical feats achieved by foundation models in the last five years, and especially in the last six months, are undeniably impressive. Also undeniable is the fact that most healthcare systems across the world are under considerable strain. It is right, therefore, to recognise and invest in the potentially transformative power of models such as Med-PaLM and ChatGPT – healthcare systems will almost certainly benefit. However, overstating their current potential does a disservice to the complexity of healthcare and the skills required of healthcare practitioners. Not only does this ‘hype’ risk direct patient and societal harm, but it also risks re-creating the conditions of previous AI winters, when investors and enthusiasts became discouraged by technological developments that over-promised and under-delivered. This could be the most harmful outcome of all, resulting in significant opportunity costs and missed chances to transform healthcare and benefit patients in smaller, but more positively impactful, ways. A balanced approach recognising the potential benefits and limitations can help avoid this outcome.