Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, October 31, 2023

Which Humans?

Atari, M., Xue, M. J., et al.
(2023, September 22).
https://doi.org/10.31234/osf.io/5b26t

Abstract

Large language models (LLMs) have recently made vast advances in both generating and analyzing textual data. Technical reports often compare LLMs’ outputs with “human” performance on various tests. Here, we ask, “Which humans?” Much of the existing literature largely ignores the fact that humans are a cultural species with substantial psychological diversity around the globe that is not fully captured by the textual data on which current LLMs have been trained. We show that LLMs’ responses to psychological measures are an outlier compared with large-scale cross-cultural data, and that their performance on cognitive psychological tasks most resembles that of people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies but declines rapidly as we move away from these populations (r = -.70). Ignoring cross-cultural diversity in both human and machine psychology raises numerous scientific and ethical issues. We close by discussing ways to mitigate the WEIRD bias in future generations of generative language models.
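To make the reported correlation concrete, here is a minimal sketch in Python, using made-up numbers rather than the authors' data or code, of how one might relate each country's cultural distance from the United States to how closely an LLM's responses match that country's human responses.

```python
# Illustrative sketch only: hypothetical numbers, not the paper's data or code.
# The idea: for each country, take (a) some measure of cultural distance from
# the United States and (b) how closely an LLM's survey responses match that
# country's human responses, then correlate the two.
import numpy as np
from scipy.stats import pearsonr

countries = ["USA", "Netherlands", "Japan", "Brazil", "Nigeria", "Pakistan"]

# Hypothetical cultural distance from the USA (0 = identical); values made up.
cultural_distance = np.array([0.00, 0.08, 0.25, 0.30, 0.45, 0.55])

# Hypothetical similarity between LLM responses and each country's human
# responses on the same psychological measures.
llm_human_similarity = np.array([0.90, 0.85, 0.60, 0.55, 0.40, 0.30])

r, p = pearsonr(cultural_distance, llm_human_similarity)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
# A strongly negative r (the paper reports r = -.70) would indicate that the
# further a population is, culturally, from the LLM's training-data base,
# the less the model's responses resemble that population's.
```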

My summary:

The authors argue that much of the existing literature on LLMs largely ignores the fact that humans are a cultural species with substantial psychological diversity around the globe. This diversity is not fully captured by the textual data on which current LLMs have been trained.

For example, LLMs are often evaluated on their ability to complete tasks such as answering trivia questions, generating creative text formats, and translating languages. However, these tasks are all biased towards the cultural context of the data on which the LLMs were trained. This means that LLMs may perform well on these tasks for people from certain cultures, but poorly for people from other cultures.

Atari and his co-authors argue that it is important to be aware of this bias when interpreting the results of LLM evaluations. They also call for more research on the performance of LLMs across different cultures and demographics.

One specific example they give is the use of LLMs to generate creative text formats, such as poems and code. They argue that LLMs that are trained on a dataset of text from English-speaking countries are likely to generate creative text that is more culturally relevant to those countries. This could lead to bias and discrimination against people from other cultures.

Atari and his co-authors conclude by calling for more research on the following questions:
  • How do LLMs perform on different tasks across different cultures and demographics?
  • How can we develop LLMs that are less biased towards the cultural context of their training data?
  • How can we ensure that LLMs are used in a way that is fair and equitable for all people?

Monday, October 30, 2023

The Mental Health Crisis Among Doctors Is a Problem for Patients

Keren Landman
vox.com
Originally posted 25 OCT 23

Here is an excerpt:

What’s causing such high levels of mental distress among doctors?

Physicians have high rates of mental distress — and they’re only getting higher. One 2023 survey found six out of 10 doctors often had feelings of burnout, compared to four out of 10 pre-pandemic. In a separate 2023 study, nearly a quarter of doctors said they were depressed.

Physicians die by suicide at rates higher than the general population, with women’s risk twice as high as men’s. In a 2022 survey, one in 10 doctors said they’d thought about or attempted suicide.

Not all doctors are at equal risk: Primary care providers — like emergency medicine, internal medicine, and pediatrics practitioners — are most likely to say they’re burned out, and female physicians experience burnout at higher rates than male physicians.

(It’s worth noting that other health care professionals — perhaps most prominently nurses — also face high levels of mental distress. But because nurses are more frequently unionized than doctors and because their professional culture isn’t the same as doctor culture, the causes and solutions are also somewhat different.)


Here is my summary:

The article discusses the mental health crisis among doctors and its implications for patients. It notes that physicians die by suicide at higher rates than the general population and that they also experience high rates of burnout and depression.

The mental health crisis among doctors is a problem for patients because it can lead to impaired judgment, medical errors, and reduced quality of care. Additionally, the stigma associated with mental illness can prevent doctors from seeking the help they need, which can further exacerbate the problem.

The article concludes by calling for more attention to the mental health of doctors and for more resources to be made available to help them.

I treat a number of physicians in my practice.

Sunday, October 29, 2023

We Can't Compete With AI Girlfriends

Freya India
Medium.com
Originally published 14 September 23

Here is an excerpt:

Of course most people are talking about what this means for men, given they make up the vast majority of users. Many worry about a worsening loneliness crisis, a further decline in sex rates, and ultimately the emergence of “a new generation of incels” who depend on and even verbally abuse their virtual girlfriends. Which is all very concerning. But I wonder, if AI girlfriends really do become as pervasive as online porn, what this will mean for girls and young women? Who feel they need to compete with this?

Most obvious to me is the ramping up of already unrealistic beauty standards. I know conservatives often get frustrated with feminists calling everything unattainable, and I agree they can go too far — but still, it’s hard to deny that the pressure to look perfect today is unlike anything we’ve ever seen before. And I don’t think that’s necessarily pressure from men but I do very much think it’s pressure from a network of profit-driven industries that take what men like and mangle it into an impossible ideal. Until the pressure isn’t just to be pretty but filtered, edited and surgically enhanced to perfection. Until the most lusted after women in our culture look like virtual avatars. And until even the most beautiful among us start to be seen as average.

Now add to all that a world of fully customisable AI girlfriends, each with flawless avatar faces and cartoonish body proportions. Eva AI’s Dream Girl Builder, for example, allows users to personalise every feature of their virtual girlfriend, from face style to butt size. Which could clearly be unhealthy for men who already have warped expectations. But it’s also unhealthy for a generation of girls already hating how they look, suffering with facial and body dysmorphia, and seeking cosmetic surgery in record numbers. Already many girls feel as if they are in constant competition with hyper-sexualised Instagram influencers and infinitely accessible porn stars. Now the next generation will grow up not just with all that but knowing the boys they like can build and sext their ideal woman, and feeling as if they must constantly modify themselves to compete. I find that tragic.


Summary:

The article discusses the growing trend of AI girlfriends and the potential dangers associated with their proliferation. It mentions that various startups are creating romantic chatbots capable of explicit conversations and sexual content, with millions of users downloading such apps. While much of the concern focuses on the impact on men, the article also highlights the negative consequences this trend may have on women, particularly in terms of unrealistic beauty standards and emotional expectations. The author expresses concerns about young girls feeling pressured to compete with AI girlfriends and the potential harm to self-esteem and body image. The article raises questions about the impact of AI girlfriends on real relationships and emotional intimacy, particularly among younger generations. It concludes with a glimmer of hope that people may eventually reject the artificial in favor of authentic human interactions.

The article raises valid concerns about the proliferation of AI girlfriends and their potential societal impacts. It is indeed troubling to think about the unrealistic beauty and emotional standards that these apps may reinforce, especially among young girls and women. The pressure to conform to these virtual ideals can undoubtedly have damaging effects on self-esteem and mental well-being.

The article also highlights concerns about the potential substitution of real emotional intimacy with AI companions, particularly among a generation that is already grappling with social anxieties and less real-world human interaction. This raises important questions about the long-term consequences of such technologies on relationships and societal dynamics.

However, the article's glimmer of optimism suggests that people may eventually realize the value of authentic, imperfect human interactions. This point is essential, as it underscores the potential for a societal shift away from excessive reliance on AI and towards more genuine connections.

In conclusion, while AI girlfriends may offer convenience and instant gratification, they also pose significant risks to societal norms and emotional well-being. It is crucial for individuals and society as a whole to remain mindful of these potential consequences and prioritize real human connections and authenticity.

Saturday, October 28, 2023

Meaning from movement and stillness: Signatures of coordination dynamics reveal infant agency

Sloan, A. T., Jones, N. A., et al. (2023).
PNAS, 120(39), e2306732120.

Abstract

How do human beings make sense of their relation to the world and realize their ability to effect change? Applying modern concepts and methods of coordination dynamics, we demonstrate that patterns of movement and coordination in 3 to 4-mo-olds may be used to identify states and behavioral phenotypes of emergent agency. By means of a complete coordinative analysis of baby and mobile motion and their interaction, we show that the emergence of agency can take the form of a punctuated self-organizing process, with meaning found both in movement and stillness.

Significance

Revamping one of the earliest paradigms for the investigation of infant learning, and moving beyond reinforcement accounts, we show that the emergence of agency in infants can take the form of a bifurcation or phase transition in a dynamical system that spans the baby, the brain, and the environment. Individual infants navigate functional coupling with the world in different ways, suggesting that behavioral phenotypes of agentive discovery exist—and dynamics provides a means to identify them. This phenotyping method may be useful for identifying babies at risk.
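For readers unfamiliar with coordination dynamics, here is a minimal sketch of what a bifurcation in such a system looks like, using the classic Haken-Kelso-Bunz relative-phase equation. This is a textbook illustration with arbitrary parameter values, not the infant-mobile model analyzed in the paper.

```python
# Illustrative sketch of a bifurcation in a classic coordination-dynamics model
# (the Haken-Kelso-Bunz equation), NOT the infant-mobile model used in the paper.
# dphi/dt = -a*sin(phi) - 2*b*sin(2*phi), where phi is the relative phase
# between two coordinated components.
import numpy as np

def dphi_dt(phi, a=1.0, b=1.0):
    return -a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi)

def antiphase_is_stable(a, b, eps=1e-3):
    """Check stability of the anti-phase state (phi = pi) by perturbing it slightly."""
    phi = np.pi + eps
    return dphi_dt(phi, a, b) < 0  # flow pulls back toward pi -> stable

for ratio in [1.0, 0.5, 0.3, 0.25, 0.2, 0.1]:
    stable = antiphase_is_stable(a=1.0, b=ratio)
    print(f"b/a = {ratio:.2f}: anti-phase coordination {'stable' if stable else 'lost (phase transition)'}")
# As b/a drops to about 0.25, the anti-phase attractor loses stability:
# a small, smooth parameter change produces a sudden qualitative shift,
# the kind of punctuated transition the authors describe in infant behavior.
```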

Here is my take:

The researchers found that the emergence of agency can take the form of a punctuated, self-organizing process, with meaning found both in movement and stillness.

The findings of this study suggest that infants are not simply passive observers of the world around them, but rather active participants in their own learning and development. The researchers believe that their work could have implications for the early identification of infants at risk for developmental delays.

Here are some of the key takeaways from the study:
  • Infants learn to make sense of their relation to the world through their movement and interaction with their environment.
  • The emergence of agency is a punctuated, self-organizing process that occurs in both movement and stillness.
  • Individual infants navigate functional coupling with the world in different ways, suggesting that behavioral phenotypes of agentive discovery exist.
  • Dynamics provides a means to identify behavioral phenotypes of agentive discovery, which may be useful for identifying babies at risk.

This study is a significant contribution to our understanding of how infants learn and develop. It provides new insights into the role of movement and stillness in the emergence of agency. The findings have the potential to improve our ability to identify and support infants at risk for developmental delays.

Friday, October 27, 2023

Theory of consciousness branded 'pseudoscience' by neuroscientists

Clare Wilson
New Scientist
Originally posted 19 Sept 23

Consciousness is one of science’s deepest mysteries; it is considered so difficult to explain how physical entities like brain cells produce subjective sensory experiences, such as the sensation of seeing the colour red, that this is sometimes called “the hard problem” of science.

While the question has long been investigated by studying the brain, integrated information theory (IIT) came from considering the mathematical structure of information-processing networks and could also apply to animals or artificial intelligence.

It says that a network or system has a higher level of consciousness if it is more densely interconnected, such that the interactions between its connection points or nodes yield more information than if it is reduced to its component parts.

IIT predicts that it is theoretically possible to calculate a value for the level of consciousness, termed phi, of any network with known structure and functioning. But as the number of nodes within a network grows, the sums involved get exponentially bigger, meaning that it is practically impossible to calculate phi for the human brain – or indeed any information-processing network with more than about 10 nodes.

(cut)

Giulio Tononi at the University of Wisconsin-Madison, who first developed IIT and took part in the recent testing, did not respond to New Scientist’s requests for comment. But Johannes Fahrenfort at VU Amsterdam in the Netherlands, who was not involved in the recent study, says the letter went too far. “There isn’t a lot of empirical support for IIT. But that doesn’t warrant calling it pseudoscience.”

Complicating matters, there is no single definition of pseudoscience. But IIT is not in the same league as astrology or homeopathy, says James Ladyman at the University of Bristol in the UK. “It looks like a serious attempt to understand consciousness. It doesn’t make a theory pseudoscience just because some people are making exaggerated claims.”


Summary:

A group of 124 neuroscientists, including prominent figures in the field, has criticized the integrated information theory (IIT) of consciousness in an open letter. They argue that the recent experimental evidence said to support IIT did not actually test its core ideas, which they say are practically impossible to test. IIT suggests that the level of consciousness, called "phi," can be calculated for any network with known structure and functioning, but this becomes impractical for networks with many nodes, like the human brain. Some critics believe that IIT has been overhyped and may have unintended consequences for policies related to consciousness in fetuses and animals. However, not all experts consider IIT pseudoscience, with some seeing it as a serious attempt to understand consciousness.
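A rough sketch of why exact phi becomes intractable: even counting only the two-way partitions that a phi calculation would need to evaluate grows exponentially with network size. The numbers below are illustrative; this is not actual IIT software.

```python
# Illustrative sketch only: the combinatorial blow-up behind the "~10 nodes" limit.
# Computing phi requires searching over ways of cutting the system apart to find
# the partition that loses the least information; just counting the bipartitions
# of n nodes already grows as 2**(n-1) - 1.
import math

def num_bipartitions(n):
    """Ways to split n nodes into two non-empty groups: 2**(n-1) - 1."""
    return 2 ** (n - 1) - 1

for n in [4, 6, 8, 10, 12, 20, 30]:
    print(f"{n:>2} nodes -> {num_bipartitions(n):,} candidate bipartitions")

# For a human brain with roughly 8.6e10 neurons, 2**(n-1) has about
# (n-1)*log10(2), i.e. on the order of 2.6e10, digits, far beyond any feasible
# computation, which is why exact phi is only practical for very small networks.
n_brain = 8.6e10
print(f"digits in 2**(n-1) for a brain-sized network: ~{(n_brain - 1) * math.log10(2):.1e}")
```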

The debate surrounding the integrated information theory (IIT) of consciousness is a complex one. While it's clear that the recent experimental evidence has faced criticism for not directly testing the core ideas of IIT, it's important to recognize that the study of consciousness is a challenging and ongoing endeavor.

Consciousness is indeed one of science's profound mysteries, often referred to as "the hard problem." IIT, in its attempt to address this problem, has sparked valuable discussions and research. It may not be pseudoscience, but the concerns raised about overhyping its findings are valid. It's crucial for scientific theories to be communicated accurately to avoid misinterpretation and potential policy implications.

Ultimately, the study of consciousness requires a multidisciplinary approach and the consideration of various theories, and it's important to maintain a healthy skepticism while promoting rigorous scientific inquiry in this complex field.

Thursday, October 26, 2023

The Neuroscience of Trust

Paul J. Zak
Harvard Business Review
Originally posted January-February 2017

Here is an excerpt:

The Return on Trust

After identifying and measuring the managerial behaviors that sustain trust in organizations, my team and I tested the impact of trust on business performance. We did this in several ways. First, we gathered evidence from a dozen companies that have launched policy changes to raise trust (most were motivated by a slump in their profits or market share). Second, we conducted the field experiments mentioned earlier: In two businesses where trust varies by department, my team gave groups of employees specific tasks, gauged their productivity and innovation in those tasks, and gathered very detailed data—including direct measures of brain activity—showing that trust improves performance. And third, with the help of an independent survey firm, we collected data in February 2016 from a nationally representative sample of 1,095 working adults in the U.S. The findings from all three sources were similar, but I will focus on what we learned from the national data since itʼs generalizable.

By surveying the employees about the extent to which firms practiced the eight behaviors, we were able to calculate the level of trust for each organization. (To avoid priming respondents, we never used the word “trust” in surveys.) The U.S. average for organizational trust was 70% (out of a possible 100%). Fully 47% of respondents worked in organizations where trust was below the average, with one firm scoring an abysmally low 15%. Overall, companies scored lowest on recognizing excellence and sharing information (67% and 68%, respectively). So the data suggests that the average U.S. company could enhance trust by improving in these two areas—even if it didn’t improve in the other six.

The effect of trust on self-reported work performance was powerful. Respondents whose companies were in the top quartile indicated they had 106% more energy and were 76% more engaged at work than respondents whose firms were in the bottom quartile. They also reported being 50% more productive—which is consistent with our objective measures of productivity from studies we have done with employees at work. Trust had a major impact on employee loyalty as well: Compared with employees at low-trust companies, 50% more of those working at high-trust organizations planned to stay with their employer over the next year, and 88% more said they would recommend their company to family and friends as a place to work.
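For readers curious how such quartile comparisons work, here is a minimal sketch with entirely hypothetical data (not Zak's survey): average each organization's ratings on the eight behaviors into a trust score, bin organizations into quartiles, and compare an outcome between the top and bottom quartiles.

```python
# Minimal sketch with made-up data (not Zak's survey or methodology details).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_orgs = 200

# Hypothetical per-organization averages of employee ratings (0-100) on the
# eight trust-building behaviors; the trust score is their mean.
behaviors = rng.uniform(40, 95, size=(n_orgs, 8))
trust = behaviors.mean(axis=1)

# Hypothetical outcome (self-reported energy) that rises with trust, plus noise.
energy = 50 + 0.8 * (trust - trust.mean()) + rng.normal(0, 5, n_orgs)

df = pd.DataFrame({"trust": trust, "energy": energy})
df["quartile"] = pd.qcut(df["trust"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

bottom = df.loc[df["quartile"] == "Q1", "energy"].mean()
top = df.loc[df["quartile"] == "Q4", "energy"].mean()
print(f"Top-quartile orgs report {100 * (top - bottom) / bottom:.0f}% higher energy than bottom-quartile orgs")
```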


Here is a summary of the key points from the article:
  • Trust is crucial for social interactions and has implications for economic, political, and healthcare outcomes. There are two main types of trust - emotional trust and cognitive trust.
  • Emotional trust develops early in life through attachments and is more implicit, while cognitive trust relies on reasoning and develops later. Both rely on brain regions involved in reward, emotion regulation, understanding others' mental states, and decision making.
  • Oxytocin and vasopressin play key roles in emotional trust by facilitating social bonding and attachment. Disruptions to these systems are linked to social disorders like autism.
  • The prefrontal cortex, amygdala, and striatum are involved in cognitive trust judgments and updating trustworthiness based on new evidence. Damage to prefrontal regions impairs updating of trustworthiness.
  • Trust engages the brain's reward circuitry. Betrayals of trust activate pain and emotion regulation circuits. Trustworthiness cues engage the mentalizing network for inferring others' intentions.
  • Neuroimaging studies show that trust engages brain regions involved in reward, emotion regulation, understanding mental states, and decision making. Oxytocin administration increases trusting behavior.
  • Understanding the neuroscience of trust can inform efforts to build trust in healthcare, economic, political, and other social domains. More research is needed on how trust develops over the lifespan.

Wednesday, October 25, 2023

The moral psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A.
(2023, September 22). 

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in Artificial Intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients, or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to a moral interaction, or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Conclusion

We have not addressed every issue at the intersection of AI and moral psychology. Questions about how people perceive AI plagiarism, about how the presence of AI agents can reduce or enhance trust between groups of humans, and about how sexbots will alter intimate human relations are the subjects of active research programs. Many more yet-unasked questions will only be provoked as new AI abilities develop. Given the pace of this change, any review paper will only be a snapshot. Nevertheless, the very recent and rapid emergence of AI-driven technology is colliding with moral intuitions forged by culture and evolution over the span of millennia. Grounding an imaginative speculation about the possibilities of AI with a thorough understanding of the structure of human moral psychology will help prepare for a world shared with, and complicated by, machines.

Tuesday, October 24, 2023

The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence

Telkamp, J.B., Anderson, M.H. 
J Bus Ethics 178, 961–976 (2022).

Abstract

Organizations are making massive investments in artificial intelligence (AI), and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different people, and many ethical dilemmas require trade-offs such that no course of action is universally considered ethical. How should organizations using AI—and the AI itself—process ethical dilemmas where humans disagree on the morally right course of action? Though a variety of ethical AI frameworks have been suggested, these approaches do not adequately address how people make ethical evaluations of AI systems or how to incorporate the fundamental disagreements people have regarding what is and is not ethical behavior. Drawing on moral foundations theory, we theorize that a person will perceive an organization’s use of AI, its data procedures, and the resulting AI decisions as ethical to the extent that those decisions resonate with the person’s moral foundations. Since people hold diverse moral foundations, this highlights the crucial need to consider individual moral differences at multiple levels of AI. We discuss several unresolved issues and suggest potential approaches (such as moral reframing) for thinking about conflicts in moral judgments concerning AI.

The article is paywalled; the link is above.

Here are some additional points:
  • The article raises important questions about the ethicality of AI systems. It is clear that there is no single, monolithic standard of morality that can be applied to AI systems. Instead, we need to consider a plurality of moral foundations when evaluating the ethicality of AI systems.
  • The article also highlights the challenges of assessing the ethicality of AI systems. It is difficult to measure the impact of AI systems on human well-being, and there is no single, objective way to determine whether an AI system is ethical. However, the article suggests that a pluralistic approach to ethical evaluation, which takes into account a variety of moral perspectives, is the best way to assess the ethicality of AI systems.
  • The article concludes by calling for more research on the implications of diverse human moral foundations for the ethicality of AI. This is an important area of research, and I hope more of it is conducted in the future.

Monday, October 23, 2023

The Knobe Effect From the Perspective of Normative Orders

Waleszczyński, A., Obidziński, M., & Rejewska, J.
(2018). Studia Humana, 7(4), 9-15.
https://doi.org/10.2478/sh-2018-0019

Abstract

The characteristic asymmetry in the attribution of intentionality in causing side effects, known as the Knobe effect, is considered to be a stable model of human cognition. This article looks at whether the way of thinking and analysing one scenario may affect the other and whether the mutual relationship between the ways in which both scenarios are analysed may affect the stability of the Knobe effect. The theoretical analyses and empirical studies performed are based on a distinction between moral and non-moral normativity possibly affecting the judgments passed in both scenarios. Therefore, an essential role in judgments about the intentionality of causing a side effect could be played by normative competences responsible for distinguishing between normative orders. 

From the Summary

As to the question asked at the onset of this article, namely, whether the way of thinking about the intentionality of causing a side effect in morally negative situations affects the way of thinking about the intentionality of causing a side effect in morally positive situations, or vice versa, the answer could be as follows. It is very likely that the way of thinking and analysing each of the scenarios depends on the normative order from the perspective of which each particular scenario or sequence of scenarios is considered. At the same time, the results suggest that it is moral normativity that decides the stability of the Knobe effect. Nevertheless, more in-depth empirical and theoretical studies are required in order to analyse the problems discussed in this article more thoroughly. 


My brief explanation:

The Knobe Effect is a phenomenon in experimental philosophy in which people are more likely to judge a foreseen side effect as intentional when it is harmful than when it is helpful, even though the agent did not specifically intend the side effect in either case. This effect is important to psychologists because it sheds light on how people understand intentionality, which is a central concept in psychology.

The Knobe Effect is also important to psychologists because it has implications for our understanding of moral judgment. For example, if people are more likely to blame an agent for a harmful side effect that the agent could have foreseen, even if the agent did not intend the side effect, then this suggests that people may be using moral considerations to inform their judgments of intentionality.

Understanding these asymmetrical attributions may be most helpful when working with high-conflict couples, who tend to interpret harmful messages as intentional while minimizing helpful and supportive messages (because those are expected).