Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Wednesday, May 31, 2023

Can AI language models replace human participants?

Dillon, D., Tandon, N., Gu, Y., & Gray, K.
Trends in Cognitive Sciences
May 10, 2023

Abstract

Recent work suggests that language models such as GPT can make human-like judgments across a number of domains. We explore whether and when language models might replace human participants in psychological science. We review nascent research, provide a theoretical model, and outline caveats of using AI as a participant.

(cut)

Does GPT make human-like judgments?

We initially doubted the ability of LLMs to capture human judgments but, as we detail in Box 1, the moral judgments of GPT-3.5 were extremely well aligned with human moral judgments in our analysis (r = 0.95; full details at https://nikett.github.io/gpt-as-participant). Human morality is often argued to be especially difficult for language models to capture, and yet we found powerful alignment between GPT-3.5 and human judgments.

We emphasize that this finding is just one anecdote and we do not make any strong claims about the extent to which LLMs make human-like judgments, moral or otherwise. Language models also might be especially good at predicting moral judgments because moral judgments heavily hinge on the structural features of scenarios, including the presence of an intentional agent, the causation of damage, and a vulnerable victim, features that language models may have an easy time detecting.  However, the results are intriguing.

Other researchers have empirically demonstrated GPT-3’s ability to simulate human participants in domains beyond moral judgments, including predicting voting choices, replicating behavior in economic games, and displaying human-like problem solving and heuristic judgments on scenarios from cognitive psychology. LLM studies have also replicated classic social science findings including the Ultimatum Game and the Milgram experiment. One company (http://syntheticusers.com) is expanding on these findings, building infrastructure to replace human participants and offering ‘synthetic AI participants’ for studies.

(cut)

From Caveats and looking ahead

Language models may be far from human, but they are trained on a tremendous corpus of human expression and thus they could help us learn about human judgments. We encourage scientists to compare simulated language model data with human data to see how aligned they are across different domains and populations.  Just as language models like GPT may help to give insight into human judgments, comparing LLMs with human judgments can teach us about the machine minds of LLMs; for example, shedding light on their ethical decision making.

Lurking under the specific concerns about the usefulness of AI language models as participants is an age-old question: can AI ever be human enough to replace humans? On the one hand, critics might argue that AI participants lack the rationality of humans, making judgments that are odd, unreliable, or biased. On the other hand, humans are odd, unreliable, and biased – and other critics might argue that AI is just too sensible, reliable, and impartial.  What is the right mix of rational and irrational to best capture a human participant?  Perhaps we should ask a big sample of human participants to answer that question. We could also ask GPT.
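
The alignment check described above is simple to reproduce in outline. Below is a minimal Python sketch, using made-up placeholder numbers rather than the authors' data, of the comparison the authors recommend: elicit ratings from a language model with the same scenarios and instructions given to human participants, then correlate the two sets of ratings. (The r = 0.95 in Box 1 comes from the authors' own materials, not from this code.)

```python
from scipy.stats import pearsonr

# Placeholder data for illustration only: mean human moral-acceptability
# ratings (1-7) for a set of scenarios, and the corresponding ratings
# obtained by prompting a language model with the same instructions the
# human participants received (and parsing its numeric replies).
human_means = [2.1, 6.5, 2.8, 5.9, 1.4]
llm_ratings = [1.8, 6.9, 3.2, 6.1, 1.9]  # in practice: parsed from model output

# Pearson correlation quantifies human-LLM alignment across scenarios.
r, p = pearsonr(human_means, llm_ratings)
print(f"Human-LLM alignment: r = {r:.2f} (p = {p:.3f})")
```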

Tuesday, May 30, 2023

Are We Ready for AI to Raise the Dead?

Jack Holmes
Esquire Magazine
Originally posted 4 May 23

Here is an excerpt:

You can see wonderful possibilities here. Some might find comfort in hearing their mom’s voice, particularly if she sounds like she really sounded and gives the kind of advice she really gave. But Sandel told me that when he presents the choice to students in his ethics classes, the reaction is split, even as he asks in two different ways. First, he asks whether they’d be interested in the chatbot if their loved one bequeathed it to them upon their death. Then he asks if they’d be interested in building a model of themselves to bequeath to others. Oh, and what if a chatbot is built without input from the person getting resurrected? The notion that someone chose to be represented posthumously in a digital avatar seems important, but even then, what if the model makes mistakes? What if it misrepresents—slanders, even—the dead?

Soon enough, these questions won’t be theoretical, and there is no broad agreement about whom—or even what—to ask. We’re approaching a more fundamental ethical quandary than we often hear about in discussions around AI: human bias embedded in algorithms, privacy and surveillance concerns, mis- and disinformation, cheating and plagiarism, the displacement of jobs, deepfakes. These issues are really all interconnected—Osama bot Laden might make the real guy seem kinda reasonable or just preach jihad to tweens—and they all need to be confronted. We think a lot about the mundane (kids cheating in AP History) and the extreme (some advanced AI extinguishing the human race), but we’re more likely to careen through the messy corridor in between. We need to think about what’s allowed and how we’ll decide.

(cut)

Our governing troubles are compounded by the fact that, while a few firms are leading the way on building these unprecedented machines, the technology will soon become diffuse. More of the codebase for these models is likely to become publicly available, enabling highly talented computer scientists to build their own in the garage. (Some folks at Stanford have already built a ChatGPT imitator for around $600.) What happens when some entrepreneurial types construct a model of a dead person without the family’s permission? (We got something of a preview in April when a German tabloid ran an AI-generated interview with ex–Formula 1 driver Michael Schumacher, who suffered a traumatic brain injury in 2013. His family threatened to sue.) What if it’s an inaccurate portrayal or it suffers from what computer scientists call “hallucinations,” when chatbots spit out wildly false things? We’ve already got revenge porn. What if an old enemy constructs a false version of your dead wife out of spite? “There’s an important tension between open access and safety concerns,” Reich says. “Nuclear fusion has enormous upside potential,” too, he adds, but in some cases, open access to the flesh and bones of AI models could be like “inviting people around the world to play with plutonium.”


Yes, there was a Black Mirror episode (Be Right Back) about this issue.  The wiki is here.

Monday, May 29, 2023

Rules

Almeida, G., Struchiner, N., & Hannikainen, I. (2023, April 17).
In K. Tobia (Ed.), Cambridge Handbook of Experimental Jurisprudence.
Cambridge University Press, forthcoming.

Abstract

Rules are ubiquitous. They figure prominently in all kinds of practical reasoning. Rules are especially important in jurisprudence, occupying a prominent role in answers to the question of “what is law?” In this chapter, we start by reviewing the evidence showing that both textual and extra-textual elements exert influence over rule violation judgments (section II). Most studies about rules contrast text with an extra-textual element identified as the “purpose” or “spirit” of the rule. But what counts as the purpose or the spirit of a rule? Is it the goal intended by the rule maker? Or is purpose necessarily moral? Section III reviews the results of experiments designed to answer these questions. These studies show that the extra-textual element that's relevant for the folk concept of rule is moral in nature. Section IV turns to the different explanations that have been entertained in the literature for the pattern of results described in Sections II and III. In section V we discuss some other extra-textual elements that have been investigated in the literature. Finally, in section VI, we connect the results about rules with other issues in legal philosophy. We conclude with a brief discussion of future directions.

Conclusion

In this chapter, we have provided an overview of the experimental jurisprudence of rules. We started by reviewing evidence that shows that extra-textual elements influence rule violation judgments (section II). We then have seen that those elements are likely moral in nature (section III). There are several ways to conceptualize the relationship between the moral and descriptive elements at play in rule violation judgments. We have reviewed some of them in section IV, where we argued that the evidence favors the hypothesis that the concept of rule has a dual character structure. In section V, we reviewed some recent studies showing that other elements, such as enforcement, also play a role in the concept of rule. Finally, in section VI, we considered the implications of these results for some other debates in legal philosophy.

While we have focused on research developed within experimental jurisprudence, empirical work in moral psychology and experimental philosophy has investigated several other questions related to rules that might be of interest to legal philosophers, such as closure rules and the process of learning rules (Nichols, 2004, 2021). But an even larger set of questions about the concept of rule has not yet been explored from an empirical perspective. We will end this chapter by discussing a few of them.


If you do legal work, this chapter may add to your expertise. The authors explore how ordinary people understand the law: do they interpret rules by their text alone, or do they think that law is intrinsically moral?

Sunday, May 28, 2023

Above the law? How motivated moral reasoning shapes evaluations of high performer unethicality

Campbell, E. M., Welsh, D. T., & Wang, W. (2023).
Journal of Applied Psychology.
Advance online publication.

Abstract

Recent revelations have brought to light the misconduct of high performers across various fields and occupations who were promoted up the organizational ladder rather than punished for their unethical behavior. Drawing on principles of motivated moral reasoning, we investigate how employee performance biases supervisors’ moral judgment of employee unethical behavior and how supervisors’ performance-focus shapes how they account for moral judgments in promotion recommendations. We test our model in three studies: a field study of 587 employees and their 124 supervisors at a Fortune 500 telecom company, an experiment with two samples of working adults, and an experiment that directly varied explanatory mechanisms. Evidence revealed a moral double standard such that supervisors rendered less punitive judgment of the unethical acts of higher performing employees. In turn, supervisors’ bottom-line mentality (i.e., fixation on achieving results) influenced the degree to which they incorporated their punitive judgments into promotability considerations. By revealing the moral leniency afforded to higher performers and the uneven consequences meted out by supervisors, our results carry implications for behavioral ethics research and for organizations seeking to retain and promote their higher performers while also maintaining ethical standards that are applied fairly across employees.

Here is the opening:

Allegations of unethical conduct perpetrated by prominent, high-performing professionals have been exploding across newsfeeds (Zacharek et al., 2017). From customer service employees and their managers (e.g., Wells Fargo fake accounts; Levitt & Schoenberg, 2020), to actors, producers, and politicians (e.g., long-term corruption of Belarus’ president; Simmons, 2020), to reporters and journalists (e.g., the National Broadcasting Company’s alleged cover-up; Farrow, 2019), to engineers and executives (e.g., Volkswagen’s emissions fraud; Vlasic, 2017), the public has been repeatedly shocked by the egregious behaviors committed by individuals recognized as high performers within their respective fields (Bennett, 2017). 

In the wake of such widespread unethical, corrupt, and exploitative behavior, many have wondered how supervisors could have systematically ignored the conduct of high-performing individuals for so long while they ascended organizational ladders. How could such misconduct have resulted in their advancement to leadership roles rather than stalled or derailed the transgressors’ careers?

The story of Carlos Ghosn at Nissan hints at why and when individuals’ unethical behavior (i.e., lying, cheating, and stealing; Treviño et al., 2006, 2014) may result in less punitive judgment (i.e., the extent to which observed behavior is morally evaluated as negative, incorrect, or inappropriate). During his 30-year career in the automotive industry, Ghosn differentiated himself as a high performer known for effective cost-cutting, strategic planning, and spearheading change; however, in 2018, he fell from grace over allegations of years of financial malfeasance and embezzlement (Leggett, 2019). When allegations broke, Nissan’s CEO stood firm in his punitive judgment that Ghosn’s behavior “cannot be tolerated by the company” (Kageyama, 2018). Still, many questioned why the executives levied judgment on the misconduct that they had overlooked for years. Tokyo bureau chief of the New York Times, Motoko Rich, reasoned that Ghosn “probably would have continued to get away with it … if the company was continuing to be successful. But it was starting to slow down. There were signs that the magic had gone” (Barbaro, 2019). Similarly, an executive pointed squarely to the relevance of Ghosn’s performance, lamenting: “what [had he] done for us lately?” (Chozick & Rich, 2018). As a high performer, Ghosn’s unethical behavior evaded punitive judgment and career consequences from Nissan executives, but their motivation to leniently judge Ghosn’s behavior seemed to wane with his level of performance. In her reporting, Rich observed: “you can get away with whatever you want as long as you’re successful. And once you’re not so successful anymore, then all that rule-breaking and brashness doesn’t look so attractive and appealing anymore” (Barbaro, 2019).

Saturday, May 27, 2023

Costly Distractions: Focusing on Individual Behavior Undermines Support for Systemic Reforms

Hagmann, D., Liao, Y., Chater, N., & Loewenstein, G. (2023, April 22).

Abstract

Policy challenges can typically be addressed both through systemic changes (e.g., taxes and mandates) and by encouraging individual behavior change. In this paper, we propose that, while in principle complementary, systemic and individual perspectives can compete for the limited attention of people and policymakers. Thus, directing policies in one of these two ways can distract the public’s attention from the other—an “attentional opportunity cost.” In two pre-registered experiments (n = 1,800) covering three high-stakes domains (climate change, retirement savings, and public health), we show that when people learn about policies targeting individual behavior (such as awareness campaigns), they are more likely to themselves propose policies that target individual behavior, and to hold individuals rather than organizational actors responsible for solving the problem, than are people who learned about systemic policies (such as taxes and mandates, Study 1). This shift in attribution of responsibility has behavioral consequences: people exposed to individual interventions are more likely to donate to an organization that educates individuals rather than one seeking to effect systemic reforms (Study 2). Policies targeting individual behavior may, therefore, have the unintended consequence of redirecting attention and attributions of responsibility away from systemic change to individual behavior.

Discussion

Major policy problems likely require a realignment of systemic incentives and regulations, as well as measures aimed at individual behavior change. In practice, systemic reforms have been difficult to implement, in part due to political polarization and in part because concentrated interest groups have lobbied against changes that threaten their profits. This has shifted the focus to individual behavior. The past two decades, in particular, have seen increasing popularity of ‘nudges’: interventions that can influence individual behavior without substantially changing economic incentives (Thaler & Sunstein, 2008). For example, people may be defaulted into green energy plans (Sunstein & Reisch, 2013) or 401(k) contributions (Madrian & Shea, 2001), and restaurants may vary whether they place calorie labels on the left or the right side of the menu (Dallas, Liu, & Ubel, 2019). These interventions have enjoyed tremendous popularity, because they can often be implemented even when opposition to systemic reforms is too large to change economic incentives. Moreover, it has been argued that nudges incur low economic costs, making them extremely cost effective even when the gains are small on an absolute scale (Tor & Klick, 2022).

In this paper, we document an important and so far unacknowledged cost of such interventions targeting individual behavior, first postulated by Chater and Loewenstein (2022). We show that when people learn about interventions that target individual behavior, they shift their attention away from systemic reforms compared to those who learn about systemic reforms. Across two experiments, we find that this subsequently affects their attitudes and behaviors. Specifically, they become less likely to propose systemic policy reforms, hold governments less responsible for solving the policy problem, and are less likely to support organizations that seek to promote systemic reform.

The findings of this study may not be news to corporate PR specialists. Indeed, as would be expected according to standard political economy considerations (e.g., Stigler, 1971), organizations act in a way that is consistent with a belief in this attentional opportunity cost account. Initiatives that have captured the public’s attention, including recycling campaigns and carbon footprint calculators, have been devised by the very organizations that stood to lose from further regulation that might have hurt their bottom line (e.g., bottle bills and carbon taxes, respectively), potentially distracting individual citizens, policymakers, and the wider public debate from systemic changes that are likely to be required to shift substantially away from the status quo.

Friday, May 26, 2023

A General Motivational Architecture for Human and Animal Personality

Del Giudice, M. (2022).
Neuroscience & Biobehavioral Reviews, 144, 104967.

Abstract

To achieve integration in the study of personality, researchers need to model the motivational processes that give rise to stable individual differences in behavior, cognition, and emotion. The missing link in current approaches is a motivational architecture—a description of the core set of mechanisms that underlie motivation, plus a functional account of their operating logic and inter-relations. This paper presents the initial version of such an architecture, the General Architecture of Motivation (GAM). The GAM offers a common language for individual differences in humans and other animals, and a conceptual toolkit for building species-specific models of personality. The paper describes the main components of the GAM and their interplay, and examines the contribution of these components to the emergence of individual differences. The final section discusses how the GAM can be used to construct explicit functional models of personality, and presents a roadmap for future research.

Conclusion

To realize the dream of an integrated science of personality, researchers will have to move beyond structural descriptions and start building realistic functional models of individual differences. I believe that ground-up adaptationism guided by evolutionary theory is the way of the future (Lukaszewski, 2021); however, I also believe that the effort spent in teasing out the logic of specific mechanisms (e.g., the anger program; Lukaszewski et al., 2020; Sell et al., 2017) will not pay off in the domain of personality without the scaffolding of a broader theory of motivation—and an architectural framework to link the mechanisms together and explain their dynamic interplay.

In this paper, I have built on previous contributions to present the initial version of the GAM, a general motivational architecture that can be adapted to fit a broad range of animal species. The framework of the GAM should make it easier to integrate theoretical and empirical results from a variety of research areas, develop functional models of personality, and—not least—compare the personality of different species based on explicit functional principles (e.g., different sets of motivational systems, differences in activation/deactivation parameters), thus overcoming the limitations of standard factor-analytic descriptions. As I noted in the introduction, the GAM is intended as a work in progress, open to integrations and revisions. I hope this proposal will stimulate the curiosity of other scholars and spark the kind of creative, integrative work that can bring the science of personality to its well-deserved maturity.




Thursday, May 25, 2023

Unselļ¬sh traits and social decision-making patterns characterize six populations of real-world extraordinary altruists

Rhoads, S. A., Vekaria, K. M. et al. (2023). 
Nature Communications
Published online 31 March 23

Abstract

Acts of extraordinary, costly altruism, in which significant risks or costs are assumed to benefit strangers, have long represented a motivational puzzle. But the features that consistently distinguish individuals who engage in such acts have not been identified. We assess six groups of real-world extraordinary altruists who had performed costly or risky and normatively rare (<0.00005% per capita) altruistic acts: heroic rescues, non-directed and directed kidney donations, liver donations, marrow or hematopoietic stem cell donations, and humanitarian aid work. Here, we show that the features that best distinguish altruists from controls are traits and decision-making patterns indicating unusually high valuation of others’ outcomes: high Honesty-Humility, reduced Social Discounting, and reduced Personal Distress. Two independent samples of adults who were asked what traits would characterize altruists failed to predict this pattern. These findings suggest that theories regarding self-focused motivations for altruism (e.g., self-enhancing reciprocity, reputation enhancement) alone are insufficient explanations for acts of real-world self-sacrifice.

From the Discussion Section

That extraordinary altruists are consistently distinguished by a common set of traits linked to unselfishness is particularly noteworthy given the differences in the demographics of the various altruistic groups we sampled and the differences in the forms of altruism they have engaged in—from acts of physical heroism to the decision to donate bone marrow. This finding replicates and extends findings from a previous study demonstrating that extraordinary altruists show heightened subjective valuation of socially distant others. In addition, our results are consistent with a recent meta-analysis of 770 studies finding a strong and consistent relationship between Honesty-Humility and various forms of self-reported and laboratory-measured prosociality. Coupled with findings that low levels of unselfish traits (e.g., low Honesty-Humility, high social discounting) correspond to exploitative and antisocial behaviors such as cheating and aggression, these results also lend support to the notion of a bipolar caring continuum along which individuals vary in the degree to which they subjectively value (care about) the welfare of others. They further suggest altruism—arguably the willingness to be voluntarily “exploited” by others—to be the opposite of phenotypes like psychopathy that are characterized by exploiting others. These traits may best predict behavior in novel contexts lacking strong norms, particularly when decisions are made rapidly and intuitively. Notably, people who are higher in prosociality are more likely to participate in psychological research to begin with—thus the observed differences between altruists and controls may be underestimates (i.e., population-level differences may be larger).
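
For readers unfamiliar with the Social Discounting construct mentioned in the abstract and discussion, the literature this study draws on typically models it with a hyperbolic function: the value a person places on an outcome for someone else falls off with social distance. Here is a small illustrative sketch assuming the standard hyperbolic form from that literature; the function and the example numbers are background context, not parameters taken from this paper.

```python
def social_discounted_value(amount: float, social_distance: int, k: float) -> float:
    """Standard hyperbolic social discounting: subjective value of giving
    `amount` to a person at `social_distance` (1 = closest person,
    100 = a stranger), with discount rate k. A lower k means the welfare
    of distant others retains more value, which is the "reduced Social
    Discounting" pattern reported for extraordinary altruists.
    """
    return amount / (1 + k * social_distance)

# Illustrative (made-up) discount rates for a $75 outcome at social distance 50:
print(social_discounted_value(75, 50, k=0.05))  # steeper discounting
print(social_discounted_value(75, 50, k=0.01))  # shallower, altruist-like
```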

Wednesday, May 24, 2023

Fighting for our cognitive liberty

Liz Mineo
The Harvard Gazette
Originally published 26 April 23

Imagine going to work and having your employer monitor your brainwaves to see whether you’re mentally tired or fully engaged in filling out that spreadsheet on April sales.

Nita Farahany, professor of law and philosophy at Duke Law School and author of “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” says it’s already happening, and we all should be worried about it.

Farahany highlighted the promise and risks of neurotechnology in a conversation with Francis X. Shen, an associate professor in the Harvard Medical School Center for Bioethics and the MGH Department of Psychiatry, and an affiliated professor at Harvard Law School. The Monday webinar was co-sponsored by the Harvard Medical School Center for Bioethics, the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and the Dana Foundation.

Farahany said the practice of tracking workers’ brains, once exclusively the stuff of science fiction, follows the natural evolution of personal technology, which has normalized the use of wearable devices that chronicle heartbeats, footsteps, and body temperatures. Sensors capable of detecting and decoding brain activity already have been embedded into everyday devices such as earbuds, headphones, watches, and wearable tattoos.

“Commodification of brain data has already begun,” she said. “Brain sensors are already being sold worldwide. It isn’t everybody who’s using them yet. When it becomes an everyday part of our everyday lives, that’s the moment at which you hope that the safeguards are already in place. That’s why I think now is the right moment to do so.”

Safeguards to protect people’s freedom of thought, privacy, and self-determination should be implemented now, said Farahany. Five thousand companies around the world are using SmartCap technologies to track workers’ fatigue levels, and many other companies are using other technologies to track focus, engagement and boredom in the workplace.

If protections are put in place, said Farahany, the story with neurotechnology could be different than the one Shoshana Zuboff warns of in her 2019 book, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.” In it Zuboff, Charles Edward Wilson Professor Emerita at Harvard Business School, examines the threat of the widescale corporate commodification of personal data in which predictions of our consumer activities are bought, sold, and used to modify behavior.

Tuesday, May 23, 2023

Machine learning uncovers the most robust self-report predictors of relationship quality across 43 longitudinal couples studies

Joel, S., Eastwick, P. W., et al. (2020).
Proceedings of the National Academy of Sciences,
117(32), 19061–19071.

Abstract

Given the powerful implications of relationship quality for health and well-being, a central mission of relationship science is explaining why some romantic relationships thrive more than others. This large-scale project used machine learning (i.e., Random Forests) to 1) quantify the extent to which relationship quality is predictable and 2) identify which constructs reliably predict relationship quality. Across 43 dyadic longitudinal datasets from 29 laboratories, the top relationship-specific predictors of relationship quality were perceived-partner commitment, appreciation, sexual satisfaction, perceived-partner satisfaction, and conflict. The top individual-difference predictors were life satisfaction, negative affect, depression, attachment avoidance, and attachment anxiety. Overall, relationship-specific variables predicted up to 45% of variance at baseline, and up to 18% of variance at the end of each study. Individual differences also performed well (21% and 12%, respectively). Actor-reported variables (i.e., own relationship-specific and individual-difference variables) predicted two to four times more variance than partner-reported variables (i.e., the partner’s ratings on those variables). Importantly, individual differences and partner reports had no predictive effects beyond actor-reported relationship-specific variables alone. These findings imply that the sum of all individual differences and partner experiences exert their influence on relationship quality via a person’s own relationship-specific experiences, and effects due to moderation by individual differences and moderation by partner-reports may be quite small. Finally, relationship-quality change (i.e., increases or decreases in relationship quality over the course of a study) was largely unpredictable from any combination of self-report variables. This collective effort should guide future models of relationships.
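
As a rough illustration of the Random Forests approach described in the abstract, the sketch below uses scikit-learn on synthetic stand-in data to produce the two quantities the paper reports: cross-validated variance explained (R²) and a ranking of which predictors carry the signal. The study's actual pipeline (dyadic data across 43 datasets, its own cross-validation scheme) is considerably more involved; this is only a minimal, assumed-for-illustration version.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows = participants, columns = self-report predictors
# (e.g., perceived partner commitment, appreciation, sexual satisfaction, ...).
n, p = 500, 10
X = rng.normal(size=(n, p))
# Make "relationship quality" depend mostly on the first two predictors.
y = 0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=0.8, size=n)

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Cross-validated R^2 is the "variance explained" metric the paper reports
# (up to ~45% for relationship-specific predictors at baseline).
r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(f"Cross-validated R^2: {r2:.2f}")

# Variable-importance ranking: which predictors carry the signal.
model.fit(X, y)
print(model.feature_importances_.round(2))
```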

Significance

What predicts how happy people are with their romantic relationships? Relationship science—an interdisciplinary field spanning psychology, sociology, economics, family studies, and communication—has identified hundreds of variables that purportedly shape romantic relationship quality. The current project used machine learning to directly quantify and compare the predictive power of many such variables among 11,196 romantic couples. People’s own judgments about the relationship itself—such as how satisfied and committed they perceived their partners to be, and how appreciative they felt toward their partners—explained approximately 45% of their current satisfaction. The partner’s judgments did not add information, nor did either person’s personalities or traits. Furthermore, none of these variables could predict whose relationship quality would increase versus decrease over time.

Conclusion

From a public interest standpoint, this study provides provisional answers to the perennial question “What predicts how satisfied and committed I will be with my relationship partner?” Experiencing negative affect, depression, or insecure attachment are surely relationship risk factors. But if people nevertheless manage to establish a relationship characterized by appreciation, sexual satisfaction, and a lack of conflict—and they perceive their partner to be committed and responsive—those individual risk factors may matter little. That is, relationship quality is predictable from a variety of constructs, but some matter more than others, and the most proximal predictors are features that characterize a person’s perception of the relationship itself.