Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, June 10, 2023

Generative AI entails a credit–blame asymmetry

Porsdam Mann, S., Earp, B. et al. (2023).
Nature Machine Intelligence.

The recent releases of large-scale language models (LLMs), including OpenAI’s ChatGPT and GPT-4, Meta’s LLaMA, and Google’s Bard, have garnered substantial global attention, leading to calls for urgent community discussion of the ethical issues involved. LLMs generate text by representing and predicting statistical properties of language. Optimized for statistical patterns and linguistic form rather than for truth or reliability, these models cannot assess the quality of the information they use.

Recent work has highlighted ethical risks that are associated with LLMs, including biases that arise from training data; environmental and socioeconomic impacts; privacy and confidentiality risks; the perpetuation of stereotypes; and the potential for deliberate or accidental misuse. We focus on a distinct set of ethical questions concerning moral responsibility—specifically blame and credit—for LLM-generated content. We argue that different responsibility standards apply to positive and negative uses (or outputs) of LLMs and offer preliminary recommendations. These include: calls for updated guidance from policymakers that reflects this asymmetry in responsibility standards; transparency norms; technology goals; and the establishment of interactive forums for participatory debate on LLMs.
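To make concrete the abstract’s point that LLMs are “optimized for statistical patterns and linguistic form rather than for truth or reliability,” here is a minimal toy sketch. It is purely illustrative and not from the paper: the three-sentence corpus and the generate helper are invented. It builds a bigram model in Python that continues a prompt with whatever word most often followed in its training text, with no check on whether the result is true.

    # Illustrative toy sketch: a bigram "language model" that predicts the
    # next word purely from co-occurrence statistics in its training text.
    # It has no notion of whether its output is true.
    import random
    from collections import defaultdict

    corpus = (
        "the moon is made of cheese . "
        "the moon is made of cheese . "
        "the moon is made of rock ."
    ).split()

    # Count word -> next-word frequencies from consecutive token pairs.
    counts = defaultdict(lambda: defaultdict(int))
    for word, nxt in zip(corpus, corpus[1:]):
        counts[word][nxt] += 1

    def generate(start, n_words=6):
        """Sample a continuation word by word from the bigram statistics."""
        out = [start]
        for _ in range(n_words):
            followers = counts.get(out[-1])
            if not followers:
                break
            words, freqs = zip(*followers.items())
            out.append(random.choices(words, weights=freqs)[0])
        return " ".join(out)

    print(generate("the"))  # e.g. "the moon is made of cheese ."

Because “cheese” follows “of” more often than “rock” in this toy corpus, the model will usually assert that the moon is made of cheese: a fluent continuation of the statistical pattern, not an assessment of fact. Scaled-up LLMs optimize a far more sophisticated version of the same kind of objective.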

(cut)

Credit–blame asymmetry may lead to achievement gaps

Since the Industrial Revolution, automating technologies have made workers redundant in many industries, particularly in agriculture and manufacturing. The recent assumption has been that creatives and knowledge workers would remain much less impacted by these changes in the near-to-mid-term future. Advances in LLMs challenge this premise.

How these trends will impact human workforces is a key but unresolved question. The spread of AI-based applications and tools such as LLMs will not necessarily replace human workers; it may simply shift them to tasks that complement the functions of the AI. This may decrease opportunities for human beings to distinguish themselves or excel in workplace settings. Their future tasks may involve supervising or maintaining LLMs that produce the sorts of outputs (for example, text or recommendations) that skilled human beings were previously producing and for which they were receiving credit. Consequently, work in a world relying on LLMs might often involve ‘achievement gaps’ for human beings: good, useful outcomes will be produced, but many of them will not be achievements for which human workers and professionals can claim credit.

This may result in an odd state of affairs. If responsibility for positive and negative outcomes produced by LLMs is asymmetrical as we have suggested, humans may be justifiably held responsible for negative outcomes created, or allowed to happen, when they or their organizations make use of LLMs. At the same time, they may deserve less credit for AI-generated positive outcomes, as they may not be displaying the skills and talents needed to produce text, exerting judgment to make a recommendation, or generating other creative outputs.

Friday, June 9, 2023

Undervaluing the Positive Impact of Kindness Starts Early

Echelbarger, M., & Epley, N. (2023, April 4).
psyarxiv.com

Abstract

Prosociality can create social connections that increase well-being among both givers and recipients, yet concerns about how another person might respond can make people reluctant to act prosocially. Existing research suggests these concerns may be miscalibrated such that people underestimate the positive impact their prosociality will have on recipients. Understanding when miscalibrated expectations emerge in development is critical for understanding when misplaced cognitive barriers might discourage social engagement, and for understanding when interventions to build relationships could begin. Two experiments asking children (aged 8-17, Experiment 1; aged 4-7, Experiment 2) and adults to perform the same random act of kindness for another person document that both groups significantly underestimate how “big” the act of kindness will seem to recipients, and how positive their act will make recipients feel. Participants significantly undervalued the positive impact of prosociality across ages. Miscalibrated psychological barriers to social connection may emerge early in life.

Public Significance Statement:

Prosociality tends to increase well-being among those performing the prosocial action as well as among those receiving the action. And yet, people may be somewhat reluctant to act prosocially out of concerns about how another person might respond. In two experiments involving children (4-7 years old), adolescents (8-17 years old), and adults, we find that people’s concerns tend to be systematically miscalibrated such that they underestimate how positively others will respond to their prosocial act. The degree of miscalibration is not moderated by age. This suggests that miscalibrated social cognition could make people overly reluctant to behave prosocially, that miscalibrated expectations emerge early in development, and that overcoming these social cognitive barriers could potentially increase well-being across the lifespan.

General Discussion

Positive social connections are critical for happiness and health among children, adolescents, and adults alike, and yet reaching out to connect with others can sometimes be hindered by concerns about how a recipient might respond. A growing body of recent research indicates that people consistently underestimate how positively prosocial actions will make a recipient feel, meaning that social cognition may create a misplaced barrier to social connection (Epley et al., 2022). Two experiments using a novel kindness procedure indicated that miscalibrated expectations arise early in development. In our experiments, children as young as 4-7 years, along with adults, underestimated how much their recipient would value their prosocial act and feel positive afterwards. Psychological barriers to prosocial behavior appear to emerge early and persist into adulthood.

This may seem somewhat surprising given that experience with prosociality would presumably calibrate expectations. However, people can learn from their experience only when they have experience to learn from. Expectations that encourage avoidance may keep people from having the very experiences that would calibrate their expectations (Epley et al., 2022). In addition, people may not recognize the positive impact they have had on another person even after going through an interaction with them. Strangers who have just had a conversation tend to underestimate how much their partner actually liked them (Boothby et al., 2018), another social cognitive bias that has recently been documented in young children over age 5 as well (but not among 4-year-olds; Wolf et al., 2021). These results suggest that even if the givers in our experiments had been able to interact with their recipient while performing their act of kindness, they may still not have been able to recognize how positive recipients felt. Future research should examine how people do, or do not, learn from their experiences in ways that could maintain miscalibrated social expectations, and also examine outcomes across a range of other prosocial acts and among a larger sample of children and adolescents than we studied here.

Thursday, June 8, 2023

Do Moral Beliefs Motivate Action?

Díaz, R.
Ethic Theory Moral Prac (2023).

Abstract

Do moral beliefs motivate action? To answer this question, extant arguments have considered hypothetical cases of association (dissociation) between agents’ moral beliefs and actions. In this paper, I argue that this approach can be improved by studying people’s actual moral beliefs and actions using empirical research methods. I present three new studies showing that, when the stakes are high, associations between participants’ moral beliefs and actions are actually explained by co-occurring but independent moral emotions. These findings suggest that moral beliefs themselves have little or no motivational force, supporting the Humean picture of moral motivation.

Conclusion

In this paper, I showed that the use of hypothetical cases to extract conclusions regarding the (lack of) motivational power of moral beliefs faces important limitations. I argued that these limitations can be addressed using empirical research tools, and presented a series of studies doing so.

The results of the studies show that, when the stakes are high, the apparent motivational force of beliefs is in fact explained by co-occurring moral emotions. This supports Humean views of moral motivation. The results regarding low-stake situations, however, are open to both Humean and “watered-down” Anti-Humean interpretations.

In moral practice, it probably won’t matter if moral beliefs don’t motivate us much or at all. Arguably, most real-life moral choices involve countervailing motives with more than a little motivational strength, making moral beliefs irrelevant in any case. However, the situation might be different with regards to ethical theory. Accepting that moral beliefs have some motivational force (even if very low) could be enough to solve the Moral Problem (see Introduction), while rejecting that moral beliefs have motivational force would prompt us to reject one of the other claims involved in the puzzle. Future research should help us decide between competing interpretations of the results regarding low-stakes situations presented in this paper.

Overall, the results presented in this paper put pressure on Anti-Humean views of moral motivation, as they suggest that moral beliefs have little or no motivational force.

With regards to methodology, I showed that using empirical research tools improves upon the use of hypothetical cases of moral motivation by ruling out alternative interpretations. Note, however, that the empirical investigations presented in this paper build on extant hypothetical cases and the logical tools involved in the discussion of these cases. In this sense, the studies presented in this paper do not oppose, but rather continue extant work regarding cases. Hopefully, this paper paves the way for more empirical investigations, as well as discussions on the best ways to measure and test the relations between moral behavior, moral beliefs, and moral emotions.

Wednesday, June 7, 2023

AI machines aren’t ‘hallucinating’. But their makers are

Naomi Klein
The Guardian
Originally published 8 May 23

Inside the many debates swirling around the rapid rollout of so-called artificial intelligence, there is a relatively obscure skirmish focused on the choice of the word “hallucinate”.

This is the term that architects and boosters of generative AI have settled on to characterize responses served up by chatbots that are wholly manufactured, or flat-out wrong. Like, for instance, when you ask a bot for a definition of something that doesn’t exist and it, rather convincingly, gives you one, complete with made-up footnotes. “No one in the field has yet solved the hallucination problems,” Sundar Pichai, the CEO of Google and Alphabet, told an interviewer recently.

That’s true – but why call the errors “hallucinations” at all? Why not algorithmic junk? Or glitches? Well, hallucination refers to the mysterious capacity of the human brain to perceive phenomena that are not present, at least not in conventional, materialist terms. By appropriating a word commonly used in psychology, psychedelics and various forms of mysticism, AI’s boosters, while acknowledging the fallibility of their machines, are simultaneously feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species. How else could bots like Bing and Bard be tripping out there in the ether?

(cut)

Hallucination #3: tech giants can be trusted not to break the world

Asked if he is worried about the frantic gold rush ChatGPT has already unleashed, Altman said he is, but added sanguinely: “Hopefully it will all work out.” Of his fellow tech CEOs – the ones competing to rush out their rival chatbots – he said: “I think the better angels are going to win out.”

Better angels? At Google? I’m pretty sure the company fired most of those because they were publishing critical papers about AI, or calling the company out on racism and sexual harassment in the workplace. More “better angels” have quit in alarm, most recently Hinton. That’s because, contrary to the hallucinations of the people profiting most from AI, Google does not make decisions based on what’s best for the world – it makes decisions based on what’s best for Alphabet’s shareholders, who do not want to miss the latest bubble, not when Microsoft, Meta and Apple are already all in.

(cut)

Is all of this overly dramatic? A stuffy and reflexive resistance to exciting innovation? Why expect the worst? Altman reassures us: “Nobody wants to destroy the world.” Perhaps not. But as the ever-worsening climate and extinction crises show us every day, plenty of powerful people and institutions seem to be just fine knowing that they are helping to destroy the stability of the world’s life-support systems, so long as they can keep making record profits that they believe will protect them and their families from the worst effects. Altman, like many creatures of Silicon Valley, is himself a prepper: back in 2016, he boasted: “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.”

Tuesday, June 6, 2023

Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds

Lyreskog, D.M., Zohny, H., Savulescu, J. et al.
Neuroethics 16, 12 (2023).

Abstract

A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing and swarming technologies promises to transform how we think about collective and collaborative cognitive tasks across domains, ranging from research to entertainment, and from therapeutics to military applications. As these tools continue to improve, we are prompted to monitor how they may affect our society on a broader level, but also how they may reshape our fundamental understanding of agency, responsibility, and other key concepts of our moral landscape.

In this paper we take a closer look at this class of technologies – Technologies for Collective Minds – to see not only how their implementation may react with commonly held moral values, but also how they challenge our underlying concepts of what constitutes collective or individual agency. We argue that prominent contemporary frameworks for understanding collective agency and responsibility are insufficient in terms of accurately describing the relationships enabled by Technologies for Collective Minds, and that they therefore risk obstructing ethical analysis of the implementation of these technologies in society. We propose a more multidimensional approach to better understand this set of technologies, and to facilitate future research on the ethics of Technologies for Collective Minds.

(cut)

A new field

In this paper, we have argued that new and emerging Technologies for Collective Minds (TCMs) challenge commonly held views on collective and joint actions in such a way that our conceptual and ethical frameworks appear unsuitable for this domain. This inadequacy hinders both conceptual analysis and ethical assessment, and we are therefore in urgent need of a conceptual overhaul which facilitates rather than obstructs ethical assessment. Here, we have taken only the first steps to bring about this overhaul: while our four categories – DigiMinds, UniMinds, NetMinds and MacroMinds – can help us think about the dimensions of Collective Minds and their ethical implications, it remains an open question how we should treat TCMs, and which aspects of them are most ethically salient, as this will depend on a number of parameters, including (A) the technological specifications of any TCM, (B) the domain in which said TCM is deployed (military, medicine, research, entertainment, etc.), and (C) reversibility (i.e. whether joining a given Collective Mind is permanent, or risks leaving significant permanent impacts). It is also worth recalling that these four categories, while based on technological capacities, are only conceptual tools to help navigate the ethical landscapes of Collective Minds. What we are likely to see in the coming years is the emergence of TCMs which do not easily lend themselves to being clearly boxed into any of these four categories, under descriptions such as “Cloudminds”, “Mindplexes”, or “Decentralized Selves”.

In anticipating and assessing the ethical impacts of Collective Minds, we propose that we move beyond binary approaches to thinking about agency and responsibility (i.e. that they are either individual or collective), and that frameworks focus attention instead on the specific parameters (A)–(C) stated above. Furthermore, we stress the need to continuously and fluently refine our conceptual tools to encompass those specifics, and to adapt our ethical frameworks with equal agility.

Monday, June 5, 2023

Why Conscious AI Is a Bad, Bad Idea

Anil Seth
Nautilus
Originally posted 9 May 23

Artificial intelligence is moving fast. We can now converse with large language models such as ChatGPT as if they were human beings. Vision models can generate award-winning photographs as well as convincing videos of events that never happened. These systems are certainly getting smarter, but are they conscious? Do they have subjective experiences, feelings, and conscious beliefs in the same way that you and I do, but tables and chairs and pocket calculators do not? And if not now, then when—if ever—might this happen?

While some researchers suggest that conscious AI is close at hand, others, including me, believe it remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether. The prospect of artificial consciousness raises ethical, safety, and societal challenges significantly beyond those already posed by AI. Importantly, some of these challenges arise even when AI systems merely seem to be conscious, even if, under the hood, they are just algorithms whirring away in subjective oblivion.

(cut)

There are two main reasons why creating artificial consciousness, whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With conscious AI, it gets a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.

The second reason is even more disquieting: The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.

These scenarios might seem outlandish, and it is true that conscious AI may be very far away and might not even be possible. But the implications of its emergence are sufficiently tectonic that we mustn’t ignore the possibility. Certainly, nobody should be actively trying to create machine consciousness.

Existential concerns aside, there are more immediate dangers to deal with as AI has become more humanlike in its behavior. These arise when AI systems give humans the unavoidable impression that they are conscious, whatever might be going on under the hood. Human psychology lurches uncomfortably between anthropocentrism—putting ourselves at the center of everything—and anthropomorphism—projecting humanlike qualities into things on the basis of some superficial similarity. It is the latter tendency that’s getting us in trouble with AI.

Sunday, June 4, 2023

We need to examine the beliefs of today’s tech luminaries

Anjana Ahuja
Financial Times
Originally posted 10 May 23

People who are very rich or very clever, or both, sometimes believe weird things. Some of these beliefs are captured in the acronym Tescreal. The letters represent overlapping futuristic philosophies — bookended by transhumanism and longtermism — favoured by many of AI’s wealthiest and most prominent supporters.

The label, coined by a former Google ethicist and a philosopher, is beginning to circulate online and usefully explains why some tech figures would like to see the public gaze trained on fuzzy future problems such as existential risk, rather than on current liabilities such as algorithmic bias. A fraternity that is ultimately committed to nurturing AI for a posthuman future may care little for the social injustices committed by their errant infant today.

As well as transhumanism, which advocates for the technological and biological enhancement of humans, Tescreal encompasses extropianism, a belief that science and technology will bring about indefinite lifespan; singularitarianism, the idea that an artificial superintelligence will eventually surpass human intelligence; cosmism, a manifesto for curing death and spreading outwards into the cosmos; rationalism, the conviction that reason should be the supreme guiding principle for humanity; effective altruism, a social movement that calculates how to maximally benefit others; and longtermism, a radical form of utilitarianism which argues that we have moral responsibilities towards the people who are yet to exist, even at the expense of those who currently do.

(cut, and the ending)

Gebru, along with others, has described such talk as fear-mongering and marketing hype. Many will be tempted to dismiss her views — she was sacked from Google after raising concerns over energy use and social harms linked to large language models — as sour grapes, or an ideological rant. But that glosses over the motivations of those running the AI show, a dazzling corporate spectacle with a plot line that very few are able to confidently follow, let alone regulate.

Repeated talk of a possible techno-apocalypse not only sets up these tech glitterati as guardians of humanity, it also implies an inevitability in the path we are taking. And it distracts from the real harms racking up today, identified by academics such as Ruha Benjamin and Safiya Noble. Decision-making algorithms using biased data are deprioritising black patients for certain medical procedures, while generative AI is stealing human labour, propagating misinformation and putting jobs at risk.

Perhaps those are the plot twists we were not meant to notice.


Saturday, June 3, 2023

The illusion of the mind–body divide is attenuated in males.

Berent, I. 
Sci Rep 13, 6653 (2023).
https://doi.org/10.1038/s41598-023-33079-1

Abstract

A large literature suggests that people are intuitive Dualists—they tend to perceive the mind as ethereal, distinct from the body. Here, we ask whether Dualism emanates from within the human psyche, guided, in part, by theory of mind (ToM). Past research has shown that males are poorer mind-readers than females. If ToM begets Dualism, then males should exhibit weaker Dualism, and instead, lean towards Physicalism (i.e., they should view bodies and minds alike). Experiments 1–2 show that males indeed perceive the psyche as more embodied—as more likely to emerge in a replica of one’s body, and less likely to persist in its absence (after life). Experiment 3 further shows that males are less inclined towards Empiricism—a putative byproduct of Dualism. A final analysis confirms that males’ ToM scores are lower, and ToM scores further correlate with embodiment intuitions (in Experiments 1–2). These observations (from Western participants) cannot establish universality, but the association of Dualism with ToM suggests its roots are psychological. Thus, the illusory mind–body divide may arise from the very workings of the human mind.

Discussion

People tend to consider the mind as ethereal, distinct from the body. This intuitive Dualist stance has been demonstrated in adults and children, in Western and non-Western participants and its consequences on reasoning are widespread.

Why people are putative Dualists, however, is unclear. In particular, one wonders whether Dualism arises only by cultural transmission, or whether the illusion of the mind–body divide can also emerge naturally, from ToM.

To address this question, here, we investigated whether individual differences in ToM capacities, occurring within the neurotypical population—between males and females—are linked to Dualism. Experiments 1–2 show that this is indeed the case.

Males, in this sample, considered the psyche as more strongly embodied than females: they believed that epistemic states are more likely to emerge in a replica of one’s body (in Experiment 1) and that psychological traits are less likely to persist upon the body’s demise, in the afterlife (in Experiment 2). Experiment 3 further showed that males are also more likely to consider psychological traits as innate, as expected from past findings suggesting that Dualism begets Empiricism.

A follow-up analysis has confirmed that these differences in reasoning about bodies and minds are linked to ToM. Not only did males in this sample score lower than females on ToM, but their ToM scores correlated with their Dualist intuitions.

As noted, these results ought to be interpreted with caution, as the gender differences observed here may not hold universally, and they certainly do not speak to the reasoning of any individual person. And indeed, ToM abilities demonstrably depend on multiple factors, including linguistic experience and culture. But inasmuch as females show superior ToM, they ought to lean towards Dualism and Empiricism. Dualism, then, is linked to ToM.

Friday, June 2, 2023

Is it good to feel bad about littering? Conflict between moral beliefs and behaviors for everyday transgressions

Schwartz, Stephanie A. and Inbar, Yoel
SSRN.
Originally posted 22 June 22

Abstract

People sometimes do things that they think are morally wrong. We investigate how actors’ perceptions of the morality of their own behaviors affect observers’ evaluations. In Study 1 (n = 302), we presented participants with six different descriptions of actors who routinely engaged in a morally questionable behavior and varied whether the actors thought the behavior was morally wrong. Actors who believed their behavior was wrong were seen as having better moral character, but their behavior was rated as more wrong. In Study 2 (n = 391), we investigated whether perceptions of actor metadesires were responsible for the effects of actor beliefs on judgments. We used the same stimuli and measures as in Study 1 but added a measure of the actor’s perceived desires to engage in the behaviors. As predicted, the effect of actors’ moral beliefs on judgments of their behavior and moral character was mediated by perceived metadesires.

General Discussion

In two studies, we find that actors’ beliefs about their own everyday immoral behaviors affect both how the acts and the actors are evaluated—albeit in opposite directions. An actor’s belief that his or her act is morally wrong causes observers to see the act itself as less morally acceptable, while, at the same time, it leads to more positive character judgments of the actor. In Study 2, we find that these differences in character judgments are mediated by people’s perceptions of the actor’s metadesires. Actors who see their behavior as morally wrong are presumed to have a desire not to engage in it, and this in turn leads to more positive evaluations of their character. These results suggest that one benefit of believing one’s own behavior to be immoral is that others—if they know this—will evaluate one’s character more positively.

(cut)

Honest Hypocrites 

In research on moral judgments of hypocrites, Jordan et al. (2017) found that people who publicly espouse a moral standard that they privately violate are judged particularly negatively. However, they also found that “honest hypocrites” (those who publicly condemn a behavior while admitting they engage in it themselves) are judged more positively than traditional hypocrites and equivalently to control transgressors (people who simply engage in the negative behavior without taking a public stand on its acceptability). This might seem to contradict our findings in the current studies, where people who transgressed despite thinking that the behavior was morally wrong were judged more positively than those who simply transgressed. We believe the key distinction that explains the difference between Jordan et al.’s results and ours is that in their paradigm, hypocrites publicly condemned others for engaging in the behavior in question. As Jordan et al. show, public condemnation is interpreted as a strong signal that someone is unlikely to engage in that behavior themselves; hypocrites therefore are disliked both for engaging in a negative behavior and for falsely signaling (by their public condemnation) that they wouldn’t. Honest hypocrites, who explicitly state that they engage in the negative behavior, are not falsely signaling. However, Jordan et al.’s scenarios imply to participants that honest hypocrites do condemn others—something that may strike people as unfair coming from a person who engages in the behavior themselves. Thus, honest hypocrites may be penalized for public condemnation, even as they are credited for more positive metadesires. In contrast, in our studies participants were told that the scenario protagonists thought the behavior was morally wrong but not that they publicly condemned anyone else for engaging in it. This may have allowed protagonists to benefit from more positive perceived metadesires without being penalized for public condemnation. This explanation is admittedly speculative but could be tested in future research that we outline below.


Suppose you do something bad. Will people blame you more if you knew it was wrong? Or will they blame you less?

The answer seems to be: They will think your act is more wrong, but your character is less bad.