Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, October 29, 2023

We Can't Compete With AI Girlfriends

Freya India
Medium.com
Originally published 14 September 23

Here is an excerpt:

Of course most people are talking about what this means for men, given they make up the vast majority of users. Many worry about a worsening loneliness crisis, a further decline in sex rates, and ultimately the emergence of “a new generation of incels” who depend on and even verbally abuse their virtual girlfriends. Which is all very concerning. But I wonder, if AI girlfriends really do become as pervasive as online porn, what this will mean for girls and young women? Who feel they need to compete with this?

Most obvious to me is the ramping up of already unrealistic beauty standards. I know conservatives often get frustrated with feminists calling everything unattainable, and I agree they can go too far — but still, it’s hard to deny that the pressure to look perfect today is unlike anything we’ve ever seen before. And I don’t think that’s necessarily pressure from men but I do very much think it’s pressure from a network of profit-driven industries that take what men like and mangle it into an impossible ideal. Until the pressure isn’t just to be pretty but filtered, edited and surgically enhanced to perfection. Until the most lusted after women in our culture look like virtual avatars. And until even the most beautiful among us start to be seen as average.

Now add to all that a world of fully customisable AI girlfriends, each with flawless avatar faces and cartoonish body proportions. Eva AI’s Dream Girl Builder, for example, allows users to personalise every feature of their virtual girlfriend, from face style to butt size. Which could clearly be unhealthy for men who already have warped expectations. But it’s also unhealthy for a generation of girls already hating how they look, suffering with facial and body dysmorphia, and seeking cosmetic surgery in record numbers. Already many girls feel as if they are in constant competition with hyper-sexualised Instagram influencers and infinitely accessible porn stars. Now the next generation will grow up not just with all that but knowing the boys they like can build and sext their ideal woman, and feeling as if they must constantly modify themselves to compete. I find that tragic.


Summary:

The article discusses the growing trend of AI girlfriends and the potential dangers associated with their proliferation. It mentions that various startups are creating romantic chatbots capable of explicit conversations and sexual content, with millions of users downloading such apps. While much of the concern focuses on the impact on men, the article also highlights the negative consequences this trend may have on women, particularly in terms of unrealistic beauty standards and emotional expectations. The author expresses concerns about young girls feeling pressured to compete with AI girlfriends and the potential harm to self-esteem and body image. The article raises questions about the impact of AI girlfriends on real relationships and emotional intimacy, particularly among younger generations. It concludes with a glimmer of hope that people may eventually reject the artificial in favor of authentic human interactions.

The article raises valid concerns about the proliferation of AI girlfriends and their potential societal impacts. It is indeed troubling to think about the unrealistic beauty and emotional standards that these apps may reinforce, especially among young girls and women. The pressure to conform to these virtual ideals can undoubtedly have damaging effects on self-esteem and mental well-being.

The article also highlights concerns about the potential substitution of real emotional intimacy with AI companions, particularly among a generation that is already grappling with social anxieties and less real-world human interaction. This raises important questions about the long-term consequences of such technologies on relationships and societal dynamics.

However, the article's glimmer of optimism suggests that people may eventually realize the value of authentic, imperfect human interactions. This point is essential, as it underscores the potential for a societal shift away from excessive reliance on AI and towards more genuine connections.

In conclusion, while AI girlfriends may offer convenience and instant gratification, they also pose significant risks to societal norms and emotional well-being. It is crucial for individuals and society as a whole to remain mindful of these potential consequences and prioritize real human connections and authenticity.

Saturday, October 28, 2023

Meaning from movement and stillness: Signatures of coordination dynamics reveal infant agency

Sloan, A. T., Jones, N. A., et al. (2023).
PNAS, 120(39), e2306732120.

Abstract

How do human beings make sense of their relation to the world and realize their ability to effect change? Applying modern concepts and methods of coordination dynamics, we demonstrate that patterns of movement and coordination in 3 to 4-mo-olds may be used to identify states and behavioral phenotypes of emergent agency. By means of a complete coordinative analysis of baby and mobile motion and their interaction, we show that the emergence of agency can take the form of a punctuated self-organizing process, with meaning found both in movement and stillness.

Significance

Revamping one of the earliest paradigms for the investigation of infant learning, and moving beyond reinforcement accounts, we show that the emergence of agency in infants can take the form of a bifurcation or phase transition in a dynamical system that spans the baby, the brain, and the environment. Individual infants navigate functional coupling with the world in different ways, suggesting that behavioral phenotypes of agentive discovery exist—and dynamics provides a means to identify them. This phenotyping method may be useful for identifying babies at risk.
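For readers unfamiliar with the dynamical-systems language above, here is a minimal sketch of what a "bifurcation" looks like in the simplest textbook model (a generic pitchfork normal form, not the infant–mobile coupling model the authors actually analyze): below a threshold value of a control parameter the system has a single stable state, and above that threshold qualitatively new stable states appear.

```python
# Illustrative sketch only: a pitchfork bifurcation in dx/dt = r*x - x**3.
# This is a standard textbook model, NOT the baby-mobile coordination model
# from the paper; it just shows how new stable states emerge once a control
# parameter crosses a threshold (a "phase transition"-like change).
import math

def stable_fixed_points(r: float) -> list[float]:
    """Stable fixed points of dx/dt = r*x - x**3 for control parameter r."""
    if r <= 0:
        return [0.0]                        # one stable state below threshold
    return [-math.sqrt(r), math.sqrt(r)]    # two new stable states past the bifurcation

for r in (-1.0, -0.1, 0.0, 0.1, 1.0):
    print(f"r = {r:+.1f} -> stable states: {stable_fixed_points(r)}")
```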

Here is my take:

Importantly, researchers found that the emergence of agency can take the form of a punctuated self-organizing process, with meaning found both in movement and stillness.

The findings of this study suggest that infants are not simply passive observers of the world around them, but rather active participants in their own learning and development. The researchers believe that their work could have implications for the early identification of infants at risk for developmental delays.

Here are some of the key takeaways from the study:
  • Infants learn to make sense of their relation to the world through their movement and interaction with their environment.
  • The emergence of agency is a punctuated, self-organizing process that occurs in both movement and stillness.
  • Individual infants navigate functional coupling with the world in different ways, suggesting that behavioral phenotypes of agentive discovery exist.
  • Dynamics provides a means to identify behavioral phenotypes of agentive discovery, which may be useful for identifying babies at risk.

This study is a significant contribution to our understanding of how infants learn and develop. It provides new insights into the role of movement and stillness in the emergence of agency and consciousness. The findings of this study have the potential to improve our ability to identify and support infants at risk for developmental delays.

Friday, October 27, 2023

Theory of consciousness branded 'pseudoscience' by neuroscientists

Clare Wilson
New Scientist
Originally posted 19 Sept 23

Consciousness is one of science’s deepest mysteries; it is considered so difficult to explain how physical entities like brain cells produce subjective sensory experiences, such as the sensation of seeing the colour red, that this is sometimes called “the hard problem” of science.

While the question has long been investigated by studying the brain, IIT came from considering the mathematical structure of information-processing networks and could also apply to animals or artificial intelligence.

It says that a network or system has a higher level of consciousness if it is more densely interconnected, such that the interactions between its connection points or nodes yield more information than if it is reduced to its component parts.

IIT predicts that it is theoretically possible to calculate a value for the level of consciousness, termed phi, of any network with known structure and functioning. But as the number of nodes within a network grows, the sums involved get exponentially bigger, meaning that it is practically impossible to calculate phi for the human brain – or indeed any information-processing network with more than about 10 nodes.
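To give a concrete sense of why the calculation blows up, here is a small illustrative sketch (an assumption-laden simplification, not the actual IIT algorithm, which is considerably more involved): even just counting the candidate ways to cut a network into two parts, the number of partitions a phi calculation must consider grows exponentially with the number of nodes.

```python
# Illustrative only: exact phi calculations require evaluating how much a
# system's cause-effect structure is reduced by its "worst" partition, and
# the number of candidate bipartitions alone grows exponentially with node
# count. This sketch is a simplification, not the IIT algorithm itself.

def bipartition_count(n: int) -> int:
    """Number of ways to split n nodes into two non-empty groups."""
    return 2 ** (n - 1) - 1

for n in (4, 10, 20, 50, 300):
    print(f"{n:>3} nodes -> {bipartition_count(n):.3e} candidate bipartitions")
```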

(cut)

Giulio Tononi at the University of Wisconsin-Madison, who first developed IIT and took part in the recent testing, did not respond to New Scientist’s requests for comment. But Johannes Fahrenfort at VU Amsterdam in the Netherlands, who was not involved in the recent study, says the letter went too far. “There isn’t a lot of empirical support for IIT. But that doesn’t warrant calling it pseudoscience.”

Complicating matters, there is no single definition of pseudoscience. But IIT is not in the same league as astrology or homeopathy, says James Ladyman at the University of Bristol in the UK. “It looks like a serious attempt to understand consciousness. It doesn’t make a theory pseudoscience just because some people are making exaggerated claims.”


Summary:

A group of 124 neuroscientists, including prominent figures in the field, have criticized the integrated information theory (IIT) of consciousness in an open letter. They argue that the recent experimental evidence said to support IIT did not actually test its core ideas, which they contend are practically impossible to test. IIT suggests that the level of consciousness, called "phi," can be calculated for any network with known structure and functioning, but this becomes impractical for networks with many nodes, like the human brain. Some critics believe that IIT has been overhyped and may have unintended consequences for policies related to consciousness in fetuses and animals. However, not all experts consider IIT pseudoscience, with some seeing it as a serious attempt to understand consciousness.

The debate surrounding the integrated information theory (IIT) of consciousness is a complex one. While it's clear that the recent experimental evidence has faced criticism for not directly testing the core ideas of IIT, it's important to recognize that the study of consciousness is a challenging and ongoing endeavor.

Consciousness is indeed one of science's profound mysteries, often referred to as "the hard problem." IIT, in its attempt to address this problem, has sparked valuable discussions and research. It may not be pseudoscience, but the concerns raised about overhyping its findings are valid. It's crucial for scientific theories to be communicated accurately to avoid misinterpretation and potential policy implications.

Ultimately, the study of consciousness requires a multidisciplinary approach and the consideration of various theories, and it's important to maintain a healthy skepticism while promoting rigorous scientific inquiry in this complex field.

Thursday, October 26, 2023

The Neuroscience of Trust

Paul J. Zak
Harvard Business Review
Originally posted January-February 2017

Here is an excerpt:

The Return on Trust

After identifying and measuring the managerial behaviors that sustain trust in organizations, my team and I tested the impact of trust on business performance. We did this in several ways. First, we gathered evidence from a dozen companies that have launched policy changes to raise trust (most were motivated by a slump in their profits or market share). Second, we conducted the field experiments mentioned earlier: In two businesses where trust varies by department, my team gave groups of employees specific tasks, gauged their productivity and innovation in those tasks, and gathered very detailed data—including direct measures of brain activity—showing that trust improves performance. And third, with the help of an independent survey firm, we collected data in February 2016 from a nationally representative sample of 1,095 working adults in the U.S. The findings from all three sources were similar, but I will focus on what we learned from the national data since it’s generalizable.

By surveying the employees about the extent to which firms practiced the eight behaviors, we were able to calculate the level of trust for each organization. (To avoid priming respondents, we never used the word “trust” in surveys.) The U.S. average for organizational trust was 70% (out of a possible 100%). Fully 47% of respondents worked in organizations where trust was below the average, with one firm scoring an abysmally low 15%. Overall, companies scored lowest on recognizing excellence and sharing information (67% and 68%, respectively). So the data suggests that the average U.S. company could enhance trust by improving in these two areas—even if it didn’t improve in the other six.

The effect of trust on self-reported work performance was powerful. Respondents whose companies were in the top quartile indicated they had 106% more energy and were 76% more engaged at work than respondents whose firms were in the bottom quartile. They also reported being 50% more productive—which is consistent with our objective measures of productivity from studies we have done with employees at work. Trust had a major impact on employee loyalty as well: Compared with employees at low-trust companies, 50% more of those working at high-trust organizations planned to stay with their employer over the next year, and 88% more said they would recommend their company to family and friends as a place to work.
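As a purely illustrative sketch of the arithmetic behind figures like "106% more energy" (toy numbers, not Zak's survey data): an organization's trust score is treated as the average of its ratings on the eight behaviors, and outcome comparisons are expressed as the percentage by which the top quartile exceeds the bottom quartile.

```python
# Toy illustration of the survey arithmetic described above; the behavior
# ratings and outcome values below are invented, not data from the article.
from statistics import mean

def org_trust(eight_behavior_scores: list[float]) -> float:
    """Organizational trust (0-100) as the mean of the eight behavior ratings."""
    return mean(eight_behavior_scores)

def percent_more(top_quartile_value: float, bottom_quartile_value: float) -> float:
    """How much larger the top-quartile figure is than the bottom-quartile one, in percent."""
    return 100.0 * (top_quartile_value - bottom_quartile_value) / bottom_quartile_value

print(org_trust([67, 68, 70, 72, 75, 71, 69, 68]))  # a hypothetical firm near the 70% average
print(f"{percent_more(1.03, 0.50):.0f}% more")       # e.g. 1.03 vs 0.50 on some energy scale -> 106%
```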


Here is a summary of the key points from the article:
  • Trust is crucial for social interactions and has implications for economic, political, and healthcare outcomes. There are two main types of trust - emotional trust and cognitive trust.
  • Emotional trust develops early in life through attachments and is more implicit, while cognitive trust relies on reasoning and develops later. Both rely on brain regions involved in reward, emotion regulation, understanding others' mental states, and decision making.
  • Oxytocin and vasopressin play key roles in emotional trust by facilitating social bonding and attachment. Disruptions to these systems are linked to social disorders like autism.
  • The prefrontal cortex, amygdala, and striatum are involved in cognitive trust judgments and updating trustworthiness based on new evidence. Damage to prefrontal regions impairs updating of trustworthiness.
  • Trust engages the brain's reward circuitry. Betrayals of trust activate pain and emotion regulation circuits. Trustworthiness cues engage the mentalizing network for inferring others' intentions.
  • Neuroimaging studies show that trust engages brain regions involved in reward, emotion regulation, understanding mental states, and decision making. Oxytocin administration increases trusting behavior.
  • Understanding the neuroscience of trust can inform efforts to build trust in healthcare, economic, political, and other social domains. More research is needed on how trust develops over the lifespan.

Wednesday, October 25, 2023

The moral psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A.
(2023, September 22). 

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in Artificial Intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients, or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to a moral interaction, or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Conclusion

We have not addressed every issue at the intersection of AI and moral psychology. Questions about how people perceive AI plagiarism, about how the presence of AI agents can reduce or enhance trust between groups of humans, and about how sexbots will alter intimate human relations are the subjects of active research programs. Many more as-yet-unasked questions will be provoked as new AI abilities develop. Given the pace of this change, any review paper will only be a snapshot. Nevertheless, the very recent and rapid emergence of AI-driven technology is colliding with moral intuitions forged by culture and evolution over the span of millennia. Grounding imaginative speculation about the possibilities of AI in a thorough understanding of the structure of human moral psychology will help prepare for a world shared with, and complicated by, machines.

Tuesday, October 24, 2023

The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence

Telkamp, J.B., Anderson, M.H. 
J Bus Ethics 178, 961–976 (2022).

Abstract

Organizations are making massive investments in artificial intelligence (AI), and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different people, and many ethical dilemmas require trade-offs such that no course of action is universally considered ethical. How should organizations using AI—and the AI itself—process ethical dilemmas where humans disagree on the morally right course of action? Though a variety of ethical AI frameworks have been suggested, these approaches do not adequately address how people make ethical evaluations of AI systems or how to incorporate the fundamental disagreements people have regarding what is and is not ethical behavior. Drawing on moral foundations theory, we theorize that a person will perceive an organization’s use of AI, its data procedures, and the resulting AI decisions as ethical to the extent that those decisions resonate with the person’s moral foundations. Since people hold diverse moral foundations, this highlights the crucial need to consider individual moral differences at multiple levels of AI. We discuss several unresolved issues and suggest potential approaches (such as moral reframing) for thinking about conflicts in moral judgments concerning AI.
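One way to make the paper's central claim concrete is a toy "resonance" calculation (entirely my own illustrative formalization, not a model proposed in the article): represent both a person's moral foundations and the moral profile of an AI decision as weights over the five foundations, and score perceived ethicality by how well the two align.

```python
# Hypothetical illustration of "resonance with moral foundations"; the foundation
# names come from moral foundations theory, but the scoring scheme below is an
# invented toy model, not the authors' proposal.
MFT = ("care", "fairness", "loyalty", "authority", "sanctity")

def resonance(person_weights: dict[str, float], decision_profile: dict[str, float]) -> float:
    """Dot-product alignment between a person's foundation weights and a decision's profile."""
    return sum(person_weights.get(f, 0.0) * decision_profile.get(f, 0.0) for f in MFT)

person = {"care": 0.9, "fairness": 0.8, "loyalty": 0.2, "authority": 0.1, "sanctity": 0.1}
ai_decision = {"care": 0.7, "fairness": 0.6, "loyalty": 0.5, "authority": 0.4, "sanctity": 0.2}
print(f"perceived-ethicality score: {resonance(person, ai_decision):.2f}")
```

On this toy view, the same AI decision would score differently for people with different foundation weights, which is exactly the kind of disagreement the authors argue ethical AI frameworks need to accommodate.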

The article is paywalled; the link is above.

Here are some additional points:
  • The article raises important questions about the ethicality of AI systems. It is clear that there is no single, monolithic standard of morality that can be applied to AI systems. Instead, we need to consider a plurality of moral foundations when evaluating the ethicality of AI systems.
  • The article also highlights the challenges of assessing the ethicality of AI systems. It is difficult to measure the impact of AI systems on human well-being, and there is no single, objective way to determine whether an AI system is ethical. However, the article suggests that a pluralistic approach to ethical evaluation, which takes into account a variety of moral perspectives, is the best way to assess the ethicality of AI systems.
  • The article concludes by calling for more research on the implications of diverse human moral foundations for the ethicality of AI. This is an important area of research, and I hope that more research is conducted in this area in the future.

Monday, October 23, 2023

The Knobe Effect From the Perspective of Normative Orders

Waleszczyński, A., Obidziński, M., & Rejewska, J.
(2018). Studia Humana, 7(4), 9-15.
https://doi.org/10.2478/sh-2018-0019

Abstract

The characteristic asymmetry in the attribution of intentionality in causing side effects, known as the Knobe effect, is considered to be a stable model of human cognition. This article looks at whether the way of thinking and analysing one scenario may affect the other and whether the mutual relationship between the ways in which both scenarios are analysed may affect the stability of the Knobe effect. The theoretical analyses and empirical studies performed are based on a distinction between moral and non-moral normativity possibly affecting the judgments passed in both scenarios. Therefore, an essential role in judgments about the intentionality of causing a side effect could be played by normative competences responsible for distinguishing between normative orders. 

From the Summary

As to the question asked at the outset of this article, namely, whether the way of thinking about the intentionality of causing a side effect in morally negative situations affects the way of thinking about the intentionality of causing a side effect in morally positive situations, or vice versa, the answer could be as follows. It is very likely that the way of thinking and analysing each of the scenarios depends on the normative order from the perspective of which each particular scenario or sequence of scenarios is considered. At the same time, the results suggest that it is moral normativity that decides the stability of the Knobe effect. Nevertheless, more in-depth empirical and theoretical studies are required in order to analyse the problems discussed in this article more thoroughly.


My brief explanation:

The Knobe Effect is a phenomenon in experimental philosophy where people are more likely to ascribe intentionality to an action if it has a harmful side effect that the agent could have foreseen, even if the agent did not intend the side effect. This effect is important to psychologists because it sheds light on how people understand intentionality, which is a central concept in psychology.

The Knobe Effect is also important to psychologists because it has implications for our understanding of moral judgment. For example, if people are more likely to blame an agent for a harmful side effect that the agent could have foreseen, even if the agent did not intend the side effect, then this suggests that people may be using moral considerations to inform their judgments of intentionality.

Knowledge of these asymmetrical attributions may be most helpful when working with high-conflict couples, who tend to interpret harmful messages as intentional and to minimize helpful and supportive messages (because these are expected).

Sunday, October 22, 2023

What Is Psychological Safety?

Amy Gallo
Harvard Business Review
Originally posted 15 FEB 23

Here are two excerpts:

Why is psychological safety important?

First, psychological safety leads to team members feeling more engaged and motivated, because they feel that their contributions matter and that they’re able to speak up without fear of retribution. Second, it can lead to better decision-making, as people feel more comfortable voicing their opinions and concerns, which often leads to a more diverse range of perspectives being heard and considered. Third, it can foster a culture of continuous learning and improvement, as team members feel comfortable sharing their mistakes and learning from them. (This is what my boss was doing in the opening story.)

All of these benefits — the impact on a team’s performance, innovation, creativity, resilience, and learning — have been proven in research over the years, most notably in Edmondson’s original research and in a study done at Google. That research, known as Project Aristotle, aimed to understand the factors that impacted team effectiveness across Google. Using over 30 statistical models and hundreds of variables, that project concluded that who was on a team mattered less than how the team worked together. And the most important factor was psychological safety.

Further research has shown the incredible downsides of not having psychological safety, including negative impacts on employee well-being, such as stress, burnout, and turnover, as well as on the overall performance of the organization.

(cut)

How do you create psychological safety?

Edmondson is quick to point out that “it’s more magic than science” and it’s important for managers to remember this is “a climate that we co-create, sometimes in mysterious ways.”

Anyone who has worked on a team marked by silence and the inability to speak up knows how hard it is to reverse that.

A lot of what goes into creating a psychologically safe environment are good management practices — things like establishing clear norms and expectations so there is a sense of predictability and fairness; encouraging open communication and actively listening to employees; making sure team members feel supported; and showing appreciation and humility when people do speak up.

There are a few additional tactics that Edmondson points to as well.


Here are some of my thoughts about psychological safety:
  • It is not the same as comfort. It is okay to feel uncomfortable sometimes, as long as you feel safe to take risks and speak up.
  • It is not about being friends with everyone on your team. It is about creating a respectful and inclusive environment where everyone feels like they can belong.
  • It takes time and effort to build psychological safety. It is not something that happens overnight.

Saturday, October 21, 2023

Should Trackable Pill Technologies Be Used to Facilitate Adherence Among Patients Without Insight?

Tahir Rahman
AMA J Ethics. 2019;21(4):E332-336.
doi: 10.1001/amajethics.2019.332.

Abstract

Aripiprazole tablets with sensor offer a new wireless trackable form of aripiprazole that represents a clear departure from existing drug delivery systems, routes, or formulations. This tracking technology raises concerns about the ethical treatment of patients with psychosis when it could introduce unintended treatment challenges. The use of “trackable” pills and other “smart” drugs or nanodrugs assumes renewed importance given that physicians are responsible for determining patients’ decision-making capacity. Psychiatrists are uniquely positioned in society to advocate on behalf of vulnerable patients with mental health disorders. The case presented here focuses on guidance for capacity determination and informed consent for such nanodrugs.

(cut)

Ethics and Nanodrug Prescribing

Clinicians often struggle with improving treatment adherence in patients with psychosis who lack insight and decision-making capacity, so trackable nanodrugs, even though not proven to improve compliance, are worth considering. At the same time, guidelines are lacking to help clinicians determine which patients are appropriate for trackable nanodrug prescribing. The introduction of an actual tracking device in a patient who suffers from delusions of an imagined tracking device, like Mr A, raises specific ethical concerns. Clinicians have widely accepted the premise that confronting delusions is countertherapeutic. The introduction of trackable pill technology could similarly introduce unintended harms. Paul Appelbaum has argued that “with paranoid patients often worried about being monitored or tracked, giving them a pill that does exactly that is an odd approach to treatment.” The fear of invasion of privacy might discourage some patients from being compliant with their medical care and thus foster distrust of all psychiatric services. A good therapeutic relationship (often with family, friends, or a guardian involved) is critical to the patient’s engaging in ongoing psychiatric services.

The use of trackable pill technology to improve compliance deserves further scrutiny, as continued reliance on informal, physician determinations of decision-making capacity remain a standard practice. Most patients are not yet accustomed to the idea of ingesting a trackable pill. Therefore, explanation of the intervention must be incorporated into the informed consent process, assuming the patient has decision-making capacity. Since patients may have concerns about the collected data being stored on a device, clinicians might have to answer questions regarding potential breaches of confidentiality. They will also have to contend with clinical implications of acquiring patient treatment compliance data and justifying decisions based on such information. Below is a practical guide to aid clinicians in appropriate use of this technology.