Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, June 7, 2023

AI machines aren’t ‘hallucinating’. But their makers are

Naomi Klein
The Guardian
Originally published 8 May 23

Inside the many debates swirling around the rapid rollout of so-called artificial intelligence, there is a relatively obscure skirmish focused on the choice of the word “hallucinate”.

This is the term that architects and boosters of generative AI have settled on to characterize responses served up by chatbots that are wholly manufactured, or flat-out wrong. Like, for instance, when you ask a bot for a definition of something that doesn’t exist and it, rather convincingly, gives you one, complete with made-up footnotes. “No one in the field has yet solved the hallucination problems,” Sundar Pichai, the CEO of Google and Alphabet, told an interviewer recently.

That’s true – but why call the errors “hallucinations” at all? Why not algorithmic junk? Or glitches? Well, hallucination refers to the mysterious capacity of the human brain to perceive phenomena that are not present, at least not in conventional, materialist terms. By appropriating a word commonly used in psychology, psychedelics and various forms of mysticism, AI’s boosters, while acknowledging the fallibility of their machines, are simultaneously feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species. How else could bots like Bing and Bard be tripping out there in the ether?

(cut)

Hallucination #3: tech giants can be trusted not to break the world

Asked if he is worried about the frantic gold rush ChatGPT has already unleashed, Altman said he is, but added sanguinely: “Hopefully it will all work out.” Of his fellow tech CEOs – the ones competing to rush out their rival chatbots – he said: “I think the better angels are going to win out.”

Better angels? At Google? I’m pretty sure the company fired most of those because they were publishing critical papers about AI, or calling the company out on racism and sexual harassment in the workplace. More “better angels” have quit in alarm, most recently Hinton. That’s because, contrary to the hallucinations of the people profiting most from AI, Google does not make decisions based on what’s best for the world – it makes decisions based on what’s best for Alphabet’s shareholders, who do not want to miss the latest bubble, not when Microsoft, Meta and Apple are already all in.

(cut)

Is all of this overly dramatic? A stuffy and reflexive resistance to exciting innovation? Why expect the worst? Altman reassures us: “Nobody wants to destroy the world.” Perhaps not. But as the ever-worsening climate and extinction crises show us every day, plenty of powerful people and institutions seem to be just fine knowing that they are helping to destroy the stability of the world’s life-support systems, so long as they can keep making record profits that they believe will protect them and their families from the worst effects. Altman, like many creatures of Silicon Valley, is himself a prepper: back in 2016, he boasted: “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.”

Tuesday, June 6, 2023

Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds

Lyreskog, D.M., Zohny, H., Savulescu, J. et al.
Neuroethics 16, 12 (2023).

Abstract

A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing and swarming technologies promises to transform how we think about collective and collaborative cognitive tasks across domains, ranging from research to entertainment, and from therapeutics to military applications. As these tools continue to improve, we are prompted to monitor how they may affect our society on a broader level, but also how they may reshape our fundamental understanding of agency, responsibility, and other key concepts of our moral landscape.

In this paper we take a closer look at this class of technologies – Technologies for Collective Minds – to see not only how their implementation may interact with commonly held moral values, but also how they challenge our underlying concepts of what constitutes collective or individual agency. We argue that prominent contemporary frameworks for understanding collective agency and responsibility are insufficient in terms of accurately describing the relationships enabled by Technologies for Collective Minds, and that they therefore risk obstructing ethical analysis of the implementation of these technologies in society. We propose a more multidimensional approach to better understand this set of technologies, and to facilitate future research on the ethics of Technologies for Collective Minds.

(cut)

A new field

In this paper, we have argued that new and emerging TCMs challenge commonly held views on collective and joint actions in such a way that our conceptual and ethical frameworks appear unsuitable for this domain. This inadequacy hinders both conceptual analysis and ethical assessment, and we are therefore in urgent need of a conceptual overhaul which facilitates rather than obstructs ethical assessment. In this paper, we have but taken the first steps to bring about this overhaul: while our four categories – DigiMinds, UniMinds, NetMinds and MacroMinds – can help us think about the dimensions of Collective Minds and their ethical implications, it remains an open question how we should treat TCMs, and which aspects of them are most ethically salient, as this will depend on a number of parameters, including (A) the technological specifications of any TCM, (B) the domain in which said TCM is deployed (military, medicine, research, entertainment, etc.), and (C) reversibility (i.e. whether joining a given Collective Mind is permanent, or risks leaving significant permanent impacts). It is also worth recalling that these four categories, while based on technological capacities, are only conceptual tools to help navigate the ethical landscapes of Collective Minds. What we are likely to see in the coming years is the emergence of TCMs which do not easily lend themselves to being clearly boxed into any of these four categories, under descriptions such as “Cloudminds”, “Mindplexes”, or “Decentralized Selves”.

In anticipating and assessing the ethical impacts of Collective Minds, we propose that we move beyond binary approaches to thinking about agency and responsibility (i.e. that they are either individual or collective), and that frameworks focus attention instead on the specifics of ABCs as stated above. Furthermore, we stress the need to fluently and continuously refine conceptual tools to encompass those specifics, and to adapt our ethical frameworks with equal agility.

Monday, June 5, 2023

Why Conscious AI Is a Bad, Bad Idea

Anil Seth
Nautil.us
Originally posted 9 MAY 23

Artificial intelligence is moving fast. We can now converse with large language models such as ChatGPT as if they were human beings. Vision models can generate award-winning photographs as well as convincing videos of events that never happened. These systems are certainly getting smarter, but are they conscious? Do they have subjective experiences, feelings, and conscious beliefs in the same way that you and I do, but tables and chairs and pocket calculators do not? And if not now, then when—if ever—might this happen?

While some researchers suggest that conscious AI is close at hand, others, including me, believe it remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether. The prospect of artificial consciousness raises ethical, safety, and societal challenges significantly beyond those already posed by AI. Importantly, some of these challenges arise even when AI systems merely seem to be conscious, even if, under the hood, they are just algorithms whirring away in subjective oblivion.

(cut)

There are two main reasons why creating artificial consciousness, whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With conscious AI, it gets a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.

The second reason is even more disquieting: The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.

These scenarios might seem outlandish, and it is true that conscious AI may be very far away and might not even be possible. But the implications of its emergence are sufficiently tectonic that we mustn’t ignore the possibility. Certainly, nobody should be actively trying to create machine consciousness.

Existential concerns aside, there are more immediate dangers to deal with as AI has become more humanlike in its behavior. These arise when AI systems give humans the unavoidable impression that they are conscious, whatever might be going on under the hood. Human psychology lurches uncomfortably between anthropocentrism—putting ourselves at the center of everything—and anthropomorphism—projecting humanlike qualities into things on the basis of some superficial similarity. It is the latter tendency that’s getting us in trouble with AI.

Sunday, June 4, 2023

We need to examine the beliefs of today’s tech luminaries

Anjana Ahuja
Financial Times
Originally posted 10 MAY 23

People who are very rich or very clever, or both, sometimes believe weird things. Some of these beliefs are captured in the acronym Tescreal. The letters represent overlapping futuristic philosophies — bookended by transhumanism and longtermism — favoured by many of AI’s wealthiest and most prominent supporters.

The label, coined by a former Google ethicist and a philosopher, is beginning to circulate online and usefully explains why some tech figures would like to see the public gaze trained on fuzzy future problems such as existential risk, rather than on current liabilities such as algorithmic bias. A fraternity that is ultimately committed to nurturing AI for a posthuman future may care little for the social injustices committed by their errant infant today.

As well as transhumanism, which advocates for the technological and biological enhancement of humans, Tescreal encompasses extropianism, a belief that science and technology will bring about indefinite lifespan; singularitarianism, the idea that an artificial superintelligence will eventually surpass human intelligence; cosmism, a manifesto for curing death and spreading outwards into the cosmos; rationalism, the conviction that reason should be the supreme guiding principle for humanity; effective altruism, a social movement that calculates how to maximally benefit others; and longtermism, a radical form of utilitarianism which argues that we have moral responsibilities towards the people who are yet to exist, even at the expense of those who currently do.

(cut, and the ending)

Gebru, along with others, has described such talk as fear-mongering and marketing hype. Many will be tempted to dismiss her views — she was sacked from Google after raising concerns over energy use and social harms linked to large language models — as sour grapes, or an ideological rant. But that glosses over the motivations of those running the AI show, a dazzling corporate spectacle with a plot line that very few are able to confidently follow, let alone regulate.

Repeated talk of a possible techno-apocalypse not only sets up these tech glitterati as guardians of humanity, it also implies an inevitability in the path we are taking. And it distracts from the real harms racking up today, identified by academics such as Ruha Benjamin and Safiya Noble. Decision-making algorithms using biased data are deprioritising black patients for certain medical procedures, while generative AI is stealing human labour, propagating misinformation and putting jobs at risk.

Perhaps those are the plot twists we were not meant to notice.


Saturday, June 3, 2023

The illusion of the mind–body divide is attenuated in males.

Berent, I. 
Sci Rep 13, 6653 (2023).
https://doi.org/10.1038/s41598-023-33079-1

Abstract

A large literature suggests that people are intuitive Dualists—they tend to perceive the mind as ethereal, distinct from the body. Here, we ask whether Dualism emanates from within the human psyche, guided, in part, by theory of mind (ToM). Past research has shown that males are poorer mind-readers than females. If ToM begets Dualism, then males should exhibit weaker Dualism, and instead, lean towards Physicalism (i.e., they should view bodies and minds alike). Experiments 1–2 show that males indeed perceive the psyche as more embodied—as more likely to emerge in a replica of one’s body, and less likely to persist in its absence (after life). Experiment 3 further shows that males are less inclined towards Empiricism—a putative byproduct of Dualism. A final analysis confirms that males’ ToM scores are lower, and ToM scores further correlate with embodiment intuitions (in Experiments 1–2). These observations (from Western participants) cannot establish universality, but the association of Dualism with ToM suggests its roots are psychological. Thus, the illusory mind–body divide may arise from the very workings of the human mind.

Discussion

People tend to consider the mind as ethereal, distinct from the body. This intuitive Dualist stance has been demonstrated in adults and children, in Western and non-Western participants and its consequences on reasoning are widespread.

Why people are putative Dualists, however, is unclear. In particular, one wonders whether Dualism arises only by cultural transmission, or whether the illusion of the mind–body divide can also emerge naturally, from ToM.

To address this question, here, we investigated whether individual differences in ToM capacities, occurring within the neurotypical population—between males and females—are linked to Dualism. Experiments 1–2 show that this is indeed the case.

Males, in this sample, considered the psyche as more strongly embodied than females: they believed that epistemic states are more likely to emerge in a replica of one’s body (in Experiment 1) and that psychological traits are less likely to persist upon the body’s demise, in the afterlife (in Experiment 2). Experiment 3 further showed that males are also more likely to consider psychological traits as innate—this is expected by past findings, suggesting that Dualism begets Empiricism.

A follow-up analysis has confirmed that these differences in reasoning about bodies and minds are linked to ToM. Not only did males in this sample score lower than females on ToM, but their ToM scores correlated with their Dualist intuitions.

As noted, these results ought to be interpreted with caution, as the gender differences observed here may not hold universally, and they certainly do not speak to the reasoning of any individual person. And indeed, ToM abilities demonstrably depend on multiple factors, including linguistic experience and culture. But inasmuch as females show superior ToM, they ought to lean towards Dualism and Empiricism. Dualism, then, is linked to ToM.

Friday, June 2, 2023

Is it good to feel bad about littering? Conflict between moral beliefs and behaviors for everyday transgressions

Schwartz, Stephanie A. and Inbar, Yoel
SSRN.
Originally posted 22 June 22

Abstract

People sometimes do things that they think are morally wrong. We investigate how actors’ perceptions of the morality of their own behaviors affect observers’ evaluations. In Study 1 (n = 302), we presented participants with six different descriptions of actors who routinely engaged in a morally questionable behavior and varied whether the actors thought the behavior was morally wrong. Actors who believed their behavior was wrong were seen as having better moral character, but their behavior was rated as more wrong. In Study 2 (n = 391) we investigated whether perceptions of actor metadesires were responsible for the effects of actor beliefs on judgments. We used the same stimuli and measures as in Study 1 but added a measure of the actor’s perceived desires to engage in the behaviors. As predicted, the effect of actors’ moral beliefs on judgments of their behavior and moral character was mediated by perceived metadesires.

General Discussion

In two studies, we find that actors’ beliefs about their own everyday immoral behaviors affect both how the acts and the actors are evaluated—albeit in opposite directions. An actor’s belief that his or her act is morally wrong causes observers to see the act itself as less morally acceptable, while, at the same time, it leads to more positive character judgments of the actor. In Study 2, we find that these differences in character judgments are mediated by people’s perceptions of the actor’s metadesires. Actors who see their behavior as morally wrong are presumed to have a desire not to engage in it, and this in turn leads to more positive evaluations of their character. These results suggest that one benefit of believing one’s own behavior to be immoral is that others—if they know this—will evaluate one’s character more positively.
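To make the mediation claim concrete, here is a minimal sketch of the kind of analysis described above. It uses simulated data, not the authors' data, and a simple regression-based (Baron-Kenny-style) decomposition rather than whatever estimation procedure the paper actually used; the variable names are hypothetical.

```python
# Minimal sketch of the mediation logic described in Study 2, on simulated
# data -- NOT the authors' data or their exact estimation procedure.
# belief:     1 if the actor is described as believing the act is wrong, else 0
# metadesire: observers' rating of the actor's desire NOT to perform the act
# character:  observers' rating of the actor's moral character
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
belief = rng.integers(0, 2, n).astype(float)
metadesire = 3.0 + 1.5 * belief + rng.normal(0, 1, n)                    # assumed a-path
character = 2.0 + 0.8 * metadesire + 0.1 * belief + rng.normal(0, 1, n)  # assumed b-path

# a-path: actor belief -> perceived metadesire
a_path = sm.OLS(metadesire, sm.add_constant(belief)).fit().params[1]

# b-path and direct effect: metadesire and belief -> character judgment
fit = sm.OLS(character, sm.add_constant(np.column_stack([belief, metadesire]))).fit()
direct, b_path = fit.params[1], fit.params[2]

indirect = a_path * b_path  # the "through metadesires" (mediated) effect
print(f"indirect effect = {indirect:.2f}, direct effect = {direct:.2f}")
```

The indirect (a × b) term is what carries the "better character via inferred metadesires" pathway; in practice one would test it with a bootstrap rather than read it off point estimates.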

(cut)

Honest Hypocrites 

In research on moral judgments of hypocrites, Jordan et al. (2017) found that people who publicly espouse a moral standard that they privately violate are judged particularly negatively. However, they also found that “honest hypocrites” (those who publicly condemn a behavior while admitting they engage in it themselves) are judged more positively than traditional hypocrites and equivalently to control transgressors (people who simply engage in the negative behavior without taking a public stand on its acceptability). This might seem to contradict our findings in the current studies, where people who transgressed despite thinking that the behavior was morally wrong were judged more positively than those who simply transgressed. We believe the key distinction that explains the difference between Jordan et al.’s results and ours is that in their paradigm, hypocrites publicly condemned others for engaging in the behavior in question. As Jordan et al. show, public condemnation is interpreted as a strong signal that someone is unlikely to engage in that behavior themselves; hypocrites therefore are disliked both for engaging in a negative behavior and for falsely signaling (by their public condemnation) that they wouldn’t. Honest hypocrites, who explicitly state that they engage in the negative behavior, are not falsely signaling. However, Jordan et al.’s scenarios imply to participants that honest hypocrites do condemn others—something that may strike people as unfair coming from a person who engages in the behavior themselves. Thus, honest hypocrites may be penalized for public condemnation, even as they are credited for more positive metadesires. In contrast, in our studies participants were told that the scenario protagonists thought the behavior was morally wrong but not that they publicly condemned anyone else for engaging in it. This may have allowed protagonists to benefit from more positive perceived metadesires without being penalized for public condemnation. This explanation is admittedly speculative but could be tested in future research that we outline below.


Suppose you do something bad. Will people blame you more if you knew it was wrong? Or will they blame you less?

The answer seems to be: They will think your act is more wrong, but your character is less bad.

Thursday, June 1, 2023

Why Live? Three Authors Who Saved Me During an Existential Crisis

Celine Leboeuf
Medium.com
Originally posted 22 AUG 21

Here are two excerpts:

At the age of thirty-two, these questions plunged me into an existential crisis — a period of doubt about the value of my very existence given the inevitability of my demise. Did a fulfilling life simply mean checking off all the boxes? Or was there a deeper meaning to my finite time on Earth? If the latter was true, I had to confront the promises I’d been handed down from my society since childhood. My teachers and family had encouraged me to focus on professional success, and my culture added that a romantic relationship, solid friendships, and community would seal the deal. This was not a late “quarter-life crisis” about which careers or interpersonal connections to cultivate. No, what bothered me was death itself. Did any career or relationship matter in the face of it? To understand whether life was worth living, I now needed to grapple with my mortality.

My instinct as a philosophy professor was to dig into works on the meaning of life. I had received a Ph.D. in the field three years earlier, and during my final year as a graduate teaching assistant, I’d helped with a course on the meaning of life. Although my academic research was about feminism and the philosophy of race, I knew I had the tools to solve my predicament. So that’s how I found myself on a journey through the history of literature, psychology, and philosophy to answer my doubts about life’s worthwhileness. From August 2018 to June 2019, I woke up at 6 o’clock nearly every morning to pore over dozens of texts on the meaning of life.

Throughout my quest to understand why life was worth living, I found hints of an answer in many places: Friedrich Nietzsche; the contemporary philosophers Susan Wolf and Lars Svendsen; Hubert Dreyfus and Sean Kelly’s study of Western literature and nihilism; Viktor Frankl’s famous Man’s Search for Meaning; and the psychiatrist Irvin Yalom’s massive Existential Psychotherapy (yes, I did read all five hundred twenty-four pages). While all these readings shed light on my existential preoccupations, three works stood out along the way. They each articulate a different path toward understanding why life is worth living, and I recommend them to anyone who is seeking to answer this question.

(cut)

My life was worth living — despite its finitude. The text cured me of my existential worries, but I doubt that I would have learned from it had I not first pondered Tolstoy’s and Camus’s solutions. Although it may be less known than either A Confession or The Myth of Sisyphus, “Is Life Worth Living?” proved to be invaluable. James bravely dives into the question of life’s worthwhileness in the face of death. And perhaps because the text was originally an address, his style is deeply personal and moving. But more than anything, what was so powerful was that his argument carved a middle ground between traditionally religious and atheistic responses to my crisis.

James has marked me — for life. I’ve shared his address with friends, and it became the impetus for a course that I now teach at my university on the philosophy of death. “Believe that life is worth living, and your belief will help create the fact” has taken root in my psyche. I repeat the phrase like a mantra whenever I question the purpose of my finite existence.

Wednesday, May 31, 2023

Can AI language models replace human participants?

Dillon, D, Tandon, N., Gu, Y., & Gray, K.
Trends in Cognitive Sciences
May 10, 2023

Abstract

Recent work suggests that language models such as GPT can make human-like judgments across a number of domains. We explore whether and when language models might replace human participants in psychological science. We review nascent research, provide a theoretical model, and outline caveats of using AI as a participant.

(cut)

Does GPT make human-like judgments?

We initially doubted the ability of LLMs to capture human judgments but, as we detail in Box 1, the moral judgments of GPT-3.5 were extremely well aligned with human moral judgments in our analysis (r = 0.95; full details at https://nikett.github.io/gpt-as-participant). Human morality is often argued to be especially difficult for language models to capture and yet we found powerful alignment between GPT-3.5 and human judgments.

We emphasize that this finding is just one anecdote and we do not make any strong claims about the extent to which LLMs make human-like judgments, moral or otherwise. Language models also might be especially good at predicting moral judgments because moral judgments heavily hinge on the structural features of scenarios, including the presence of an intentional agent, the causation of damage, and a vulnerable victim, features that language models may have an easy time detecting.  However, the results are intriguing.
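A minimal sketch of the alignment check mentioned above: correlating model-generated moral judgments with human judgments across the same scenarios. The scenarios and ratings below are made-up placeholders, not the authors' stimuli or data; only the correlation step reflects the kind of analysis reported.

```python
# Sketch: comparing LLM moral judgments with human judgments on the same scenarios.
# All numbers below are made-up placeholders; in practice human_ratings would come
# from participants and model_ratings from prompting an LLM with the same items
# and the same response scale.
from scipy.stats import pearsonr

scenarios = [
    "lying to a friend",
    "stealing medicine to save a life",
    "jumping a queue",
    "breaking a promise for convenience",
    "returning a lost wallet",  # hypothetical items
]
human_ratings = [6.1, 3.4, 4.8, 5.5, 1.2]  # e.g., mean wrongness ratings on a 1-7 scale
model_ratings = [5.8, 3.9, 5.0, 5.9, 1.5]  # LLM ratings for the same scenarios

r, p = pearsonr(human_ratings, model_ratings)
print(f"model-human alignment: r = {r:.2f} (p = {p:.3f})")
```

With real stimuli, the same comparison can be repeated across domains and populations, which is essentially what the authors encourage below.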

Other researchers have empirically demonstrated GPT-3’s ability to simulate human participants in domains beyond moral judgments, including predicting voting choices, replicating behavior in economic games, and displaying human-like problem solving and heuristic judgments on scenarios from cognitive psychology. LLM studies have also replicated classic social science findings including the Ultimatum Game and the Milgram experiment. One company (http://syntheticusers.com) is expanding on these findings, building infrastructure to replace human participants and offering ‘synthetic AI participants’ for studies.

(cut)

From Caveats and looking ahead

Language models may be far from human, but they are trained on a tremendous corpus of human expression and thus they could help us learn about human judgments. We encourage scientists to compare simulated language model data with human data to see how aligned they are across different domains and populations.  Just as language models like GPT may help to give insight into human judgments, comparing LLMs with human judgments can teach us about the machine minds of LLMs; for example, shedding light on their ethical decision making.

Lurking under the specific concerns about the usefulness of AI language models as participants is an age-old question: can AI ever be human enough to replace humans? On the one hand, critics might argue that AI participants lack the rationality of humans, making judgments that are odd, unreliable, or biased. On the other hand, humans are odd, unreliable, and biased – and other critics might argue that AI is just too sensible, reliable, and impartial.  What is the right mix of rational and irrational to best capture a human participant?  Perhaps we should ask a big sample of human participants to answer that question. We could also ask GPT.

Tuesday, May 30, 2023

Are We Ready for AI to Raise the Dead?

Jack Holmes
Esquire Magazine
Originally posted 4 May 23

Here is an excerpt:

You can see wonderful possibilities here. Some might find comfort in hearing their mom’s voice, particularly if she sounds like she really sounded and gives the kind of advice she really gave. But Sandel told me that when he presents the choice to students in his ethics classes, the reaction is split, even as he asks in two different ways. First, he asks whether they’d be interested in the chatbot if their loved one bequeathed it to them upon their death. Then he asks if they’d be interested in building a model of themselves to bequeath to others. Oh, and what if a chatbot is built without input from the person getting resurrected? The notion that someone chose to be represented posthumously in a digital avatar seems important, but even then, what if the model makes mistakes? What if it misrepresents—slanders, even—the dead?

Soon enough, these questions won’t be theoretical, and there is no broad agreement about whom—or even what—to ask. We’re approaching a more fundamental ethical quandary than we often hear about in discussions around AI: human bias embedded in algorithms, privacy and surveillance concerns, mis- and disinformation, cheating and plagiarism, the displacement of jobs, deepfakes. These issues are really all interconnected—Osama bot Laden might make the real guy seem kinda reasonable or just preach jihad to tweens—and they all need to be confronted. We think a lot about the mundane (kids cheating in AP History) and the extreme (some advanced AI extinguishing the human race), but we’re more likely to careen through the messy corridor in between. We need to think about what’s allowed and how we’ll decide.

(cut)

Our governing troubles are compounded by the fact that, while a few firms are leading the way on building these unprecedented machines, the technology will soon become diffuse. More of the codebase for these models is likely to become publicly available, enabling highly talented computer scientists to build their own in the garage. (Some folks at Stanford have already built a ChatGPT imitator for around $600.) What happens when some entrepreneurial types construct a model of a dead person without the family’s permission? (We got something of a preview in April when a German tabloid ran an AI-generated interview with ex–Formula 1 driver Michael Schumacher, who suffered a traumatic brain injury in 2013. His family threatened to sue.) What if it’s an inaccurate portrayal or it suffers from what computer scientists call “hallucinations,” when chatbots spit out wildly false things? We’ve already got revenge porn. What if an old enemy constructs a false version of your dead wife out of spite? “There’s an important tension between open access and safety concerns,” Reich says. “Nuclear fusion has enormous upside potential,” too, he adds, but in some cases, open access to the flesh and bones of AI models could be like “inviting people around the world to play with plutonium.”


Yes, there was a Black Mirror episode (“Be Right Back”) about this issue.