Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, September 9, 2023

Academics Raise More Than $315,000 for Data Bloggers Sued by Harvard Business School Professor Gino

Neil H. Shah & Claire Yuan
The Crimson
Originally published 1 Sept 23

A group of academics has raised more than $315,000 through a crowdfunding campaign to support the legal expenses of the professors behind data investigation blog Data Colada — who are being sued for defamation by Harvard Business School professor Francesca Gino.

Supporters of the three professors — Uri Simonsohn, Leif D. Nelson, and Joseph P. Simmons — launched the GoFundMe campaign to raise funds for their legal fees after they were named in a $25 million defamation lawsuit filed by Gino last month.

In a series of four blog posts in June, Data Colada gave a detailed account of alleged research misconduct by Gino across four academic papers. Two of the papers were retracted following the allegations by Data Colada, while another had previously been retracted in September 2021 and a fourth is set to be retracted in September 2023.

Organizers wrote on GoFundMe that the fundraiser “hit 2,000 donors and $250K in less than 2 days” and that Simonsohn, Nelson, and Simmons “are deeply moved and grateful for this incredible show of support.”

Simine Vazire, one of the fundraiser’s organizers, said she was “pleasantly surprised” by the reaction throughout academia in support of Data Colada.

“It’s been really nice to see the consensus among the academic community, which is strikingly different than what I see on LinkedIn and the non-academic community,” she said.

Elisabeth M. Bik — a data manipulation expert who also helped organize the fundraiser — credited the outpouring of financial support to solidarity and concern among scientists.

“People are very concerned about this lawsuit and about the potential silencing effect this could have on people who criticize other people’s papers,” Bik said. “I think a lot of people want to support Data Colada for their legal defenses.”

Andrew T. Miltenberg — one of Gino’s attorneys — wrote in an emailed statement that the lawsuit is “not an indictment on Data Colada’s mission.”

Friday, September 8, 2023

He was a top church official who criticized Trump. He says Christianity is in crisis

S. Detrow, G. J. Sanchez, & S. Handel
npr.org
Originally posted 8 Aug 23

Here is an excerpt:

What's the big deal? 

According to Moore, Christianity is in crisis in the United States today.
  • Moore is now the editor-in-chief of Christianity Today magazine and has written a new book, Losing Our Religion: An Altar Call for Evangelical America, which is his attempt at finding a path forward for the religion he loves.
  • Moore believes part of the problem is that "almost every part of American life is tribalized and factionalized," and that has extended to the church.
  • "I think if we're going to get past the blood and soil sorts of nationalism or all of the other kinds of totalizing cultural identities, it's going to require rethinking what the church is," he told NPR.
  • During his time in office, Trump embraced a Christian nationalist stance — the idea that the U.S. is a Christian country and should enforce those beliefs. In the run-up to the 2024 presidential election, Republican candidates are again vying for the influential evangelical Christian vote, demonstrating its continued influence in politics.
  • In Aug. 2022, church leaders confirmed the Department of Justice was investigating Southern Baptists following a sexual abuse crisis. In a statement, SBC leaders said: "Current leaders across the SBC have demonstrated a firm conviction to address those issues of the past and are implementing measures to ensure they are never repeated in the future."
  • In 2017, the church voted to formally "denounce and repudiate" white nationalism at its annual meeting.

What is he saying? 

Moore spoke to All Things Considered's Scott Detrow about what he thinks the path forward is for evangelicalism in America.

On why he thinks Christianity is in crisis:
It was the result of having multiple pastors tell me, essentially, the same story about quoting the Sermon on the Mount, parenthetically, in their preaching — "turn the other cheek" — [and] to have someone come up after to say, "Where did you get those liberal talking points?" And what was alarming to me is that in most of these scenarios, when the pastor would say, "I'm literally quoting Jesus Christ," the response would not be, "I apologize." The response would be, "Yes, but that doesn't work anymore. That's weak." And when we get to the point where the teachings of Jesus himself are seen as subversive to us, then we're in a crisis.

The information is here. 

Thursday, September 7, 2023

AI Should Be Terrified of Humans

Brian Kateman
Time.com
Originally posted 24 July 23

Here are two excerpts:

Humans have a pretty awful track record for how we treat others, including other humans. All manner of exploitation, slavery, and violence litters human history. And today, billions upon billions of animals are tortured by us in all sorts of obscene ways, while we ignore the plight of others. There’s no quick answer to ending all this suffering. Let’s not wait until we’re in a similar situation with AI, where their exploitation is so entrenched in our society that we don’t know how to undo it. If we take for granted starting right now that maybe, just possibly, some forms of AI are or will be capable of suffering, we can work with the intention to build a world where they don’t have to.

(cut)

Today, many scientists and philosophers are looking at the rise of artificial intelligence from the other end—as a potential risk to humans or even humanity as a whole. Some are raising serious concerns over the encoding of social biases like racism and sexism into computer programs, wittingly or otherwise, which can end up having devastating effects on real human beings caught up in systems like healthcare or law enforcement. Others are thinking earnestly about the risks of a digital-being-uprising and what we need to do to make sure we’re not designing technology that will view humans as an adversary and potentially act against us in one way or another. But more and more thinkers are rightly speaking out about the possibility that future AI should be afraid of us.

“We rationalize unmitigated cruelty toward animals—caging, commodifying, mutilating, and killing them to suit our whims—on the basis of our purportedly superior intellect,” Marina Bolotnikova writes in a recent piece for Vox. “If sentience in AI could ever emerge…I’m doubtful we’d be willing to recognize it, for the same reason that we’ve denied its existence in animals.” Working in animal protection, I’m sadly aware of the various ways humans subjugate and exploit other species. Indeed, it’s not only our impressive reasoning skills, our use of complex language, or our ability to solve difficult problems and introspect that makes us human; it’s also our unparalleled ability to increase non-human suffering. Right now there’s no reason to believe that we aren’t on a path to doing the same thing to AI. Consider that despite our moral progress as a species, we torture more non-humans today than ever before. We do this not because we are sadists, but because even when we know individual animals feel pain, we derive too much profit and pleasure from their exploitation to stop.


Wednesday, September 6, 2023

Could a Large Language Model Be Conscious?

David Chalmers
Boston Review
Originally posted 9 Aug 23

Here are two excerpts:

Consciousness also matters morally. Conscious systems have moral status. If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them. More generally, conscious AI will be a step on the path to human level artificial general intelligence. It will be a major step that we shouldn’t take unreflectively or unknowingly.

This gives rise to a second challenge: Should we create conscious AI? This is a major ethical challenge for the community. The question is important and the answer is far from obvious.

We already face many pressing ethical challenges about large language models. There are issues about fairness, about safety, about truthfulness, about justice, about accountability. If conscious AI is coming somewhere down the line, then that will raise a new group of difficult ethical challenges, with the potential for new forms of injustice added on top of the old ones. One issue is that conscious AI could well lead to new harms toward humans. Another is that it could lead to new harms toward AI systems themselves.

I’m not an ethicist, and I won’t go deeply into the ethical questions here, but I don’t take them lightly. I don’t want the roadmap to conscious AI that I’m laying out here to be seen as a path that we have to go down. The challenges I’m laying out in what follows could equally be seen as a set of red flags. Each challenge we overcome gets us closer to conscious AI, for better or for worse. We need to be aware of what we’re doing and think hard about whether we should do it.

(cut)

Where does the overall case for or against LLM consciousness stand?

Where current LLMs such as the GPT systems are concerned: I think none of the reasons for denying consciousness in these systems is conclusive, but collectively they add up. We can assign some extremely rough numbers for illustrative purposes. On mainstream assumptions, it wouldn’t be unreasonable to hold that there’s at least a one-in-three chance—that is, to have a subjective probability or credence of at least one-third—that biology is required for consciousness. The same goes for the requirements of sensory grounding, self models, recurrent processing, global workspace, and unified agency. If these six factors were independent, it would follow that there’s less than a one-in-ten chance that a system lacking all six, like a current paradigmatic LLM, would be conscious. Of course the factors are not independent, which drives the figure somewhat higher. On the other hand, the figure may be driven lower by other potential requirements X that we have not considered.

Taking all that into account might leave us with confidence somewhere under 10 percent in current LLM consciousness. You shouldn’t take the numbers too seriously (that would be specious precision), but the general moral is that given mainstream assumptions about consciousness, it’s reasonable to have a low credence that current paradigmatic LLMs such as the GPT systems are conscious.
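
A quick note on the arithmetic behind that "under 10 percent" figure: it is just the probability that none of the six candidate requirements actually holds. Below is a minimal sketch of the calculation, assuming (purely for illustration, as Chalmers does) a credence of exactly one-third per factor and full independence; both simplifications are flagged in the excerpt above.

```python
# Back-of-the-envelope version of Chalmers' illustrative calculation.
# Assumptions (simplifications, not Chalmers' considered view): each factor
# gets a credence of exactly 1/3 of being required for consciousness, and
# the six factors are treated as independent.

factors = [
    "biology",
    "sensory grounding",
    "self models",
    "recurrent processing",
    "global workspace",
    "unified agency",
]

p_required = 1 / 3               # credence that a given factor is required
p_not_required = 1 - p_required  # credence that it is not required

# A system lacking all six factors can be conscious only if none of them
# is in fact required. Under independence, that credence is at most:
upper_bound = p_not_required ** len(factors)

print(f"Upper bound on credence in consciousness: {upper_bound:.3f}")  # ~0.088
```

As Chalmers notes, dropping the independence assumption pushes this figure somewhat higher, while further requirements not considered here would push it lower.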


Here are some of the key points from the article:
  1. There is no consensus on what consciousness is, so it is difficult to say definitively whether or not LLMs are conscious.
  2. Some people believe that consciousness requires carbon-based biology, but Chalmers argues that this is a form of biological chauvinism.  I agree with this completely. We can have synthetic forms of consciousness.
  3. Other people believe that LLMs are not conscious because they lack sensory processing or bodily embodiment. Chalmers argues that these objections are not decisive, but they do raise important questions about the nature of consciousness.
  4. Chalmers concludes by suggesting that we should take the possibility of LLM consciousness seriously, but that we should also be cautious about making definitive claims about it.

Tuesday, September 5, 2023

How does marijuana affect the brain?

Heather Stringer
Monitor on Psychology
Vol. 54, No. 5, p. 20

Here is an excerpt:

Mixing marijuana with mental health issues

Psychologists also share a sense of urgency to clarify how cannabis affects people who suffer from preexisting mental health conditions. Many veterans who suffer from PTSD view cannabis as a safe alternative to other drugs to alleviate their symptoms (Wilkinson, S. T., et al., Psychiatric Quarterly, Vol. 87, No. 1, 2016). To investigate whether marijuana does in fact provide relief for PTSD symptoms, Jane Metrik, PhD, a professor of behavioral and social sciences at the Brown University School of Public Health and a core faculty member at the university’s Center for Alcohol and Addiction Studies, and colleagues followed more than 350 veterans for a year. They found that more frequent cannabis use worsened trauma-related intrusion symptoms—such as upsetting memories and nightmares—over time (Psychological Medicine, Vol. 52, No. 3, 2022). A PTSD diagnosis was also strongly linked with cannabis use disorder a year later. “Cannabis may give temporary relief from PTSD because there is a numbing feeling, but this fades and then people want to use again,” Metrik said. “Cannabis seems to worsen PTSD and lead to greater dependence on the drug.”

Metrik, who also works as a psychologist at the Providence VA Medical Center, has also been studying the effects of using cannabis and alcohol at the same time. “We need to understand whether cannabis can act as a substitute for alcohol or if it leads to heavier drinking,” she said. “What should we tell patients who are in treatment for problem drinking but are unwilling to stop using cannabis? Is some mild cannabis use OK? What types of cannabis formulations are helpful or harmful for people who have alcohol use disorder?”

Though there are still many unanswered questions, Metrik has seen cases that suggest adding cannabis to heavy drinking behavior is risky. Sometimes people can successfully quit drinking but are unable to stop using cannabis, which can also intensify depression and lead to cannabis hyperemesis syndrome—repeated and severe bouts of vomiting that can occur in heavy cannabis users, she said. Cannabis withdrawal symptoms such as irritability, anxiety, increased cravings, aggression, and restlessness usually subside after 1 to 2 weeks of abstinence, but insomnia tends to persist longer than the other symptoms, she said.

Cannabis may also interfere with pharmaceutical medications patients are taking to treat mental health issues. Cannabidiol (CBD) can inhibit the liver enzymes that metabolize medications such as antidepressants and antipsychotics, said Ryan Vandrey, PhD, a professor of psychiatry and behavioral sciences at Johns Hopkins University and president of APA’s Division 28 (Psychopharmacology and Substance Abuse). “This could lead to side effects because the medication is in the body longer and at higher concentrations,” he said. In a recent study, he found that a high dose of oral CBD also inhibited the metabolism of THC, so the impairment and the subjective “high” was significantly stronger and lasted for a longer time (JAMA Network Open, Vol. 6, No. 2, 2023). This contradicts the common conception that high levels of CBD reduce the effects of THC, he said. “This interaction could lead to more adverse events, such as people feeling sedated, dizzy, [or] nervous, or experiencing low blood pressure for longer periods of time,” Vandrey said.

The interactions between CBD, THC, and pharmaceutical medications also depend on the dosing and the route of administration (oral, topical, or inhalation). Vandrey is advocating for more accurate labeling to inform the public about the health risks and benefits of different products. “Cannabis is the only drug approved for therapeutic use through legislative measures rather than clinical trials,” he said. “It’s really challenging for patients and medical providers to know what dose and frequency will be effective for a specific condition.”

Monday, September 4, 2023

Amid Uncertainty About Francesca Gino’s Research, the Many Co-Authors Project Could Provide Clarity

Evan Nesterak
Behavioral Scientist
Originally posted 30 Aug 23

Here are two excerpts:

“The scientific literature must be cleansed of everything that is fraudulent, especially if it involves the work of a leading academic,” the committee wrote. “No more time and money must be wasted on replications or meta-analyses of fabricated data. Researchers’ and especially students’ too rosy view of the discipline, caused by such publications, should be corrected.”

Stapel’s modus operandi was creating fictitious datasets or tampering with existing ones that he would then “analyze” himself, or pass along to other scientists, including graduate students, as if they were real.

“When the fraud was first discovered, limiting the harm it caused for the victims was a matter of urgency,” the committee said. “This was particularly the case for Mr. Stapel’s former Ph.D. students and postdoctoral researchers, whose publications were suddenly becoming worthless.”

Why revisit the decade-old case of Stapel now? 

Because its echoes can be heard in the unfolding case of Harvard Business School Professor Francesca Gino as she faces allegations of data fraud, and her coauthors, colleagues, and the broader scientific community figure out how to respond. Listening to these echoes, especially those of the Stapel committee, helps put the Gino situation, and the efforts to remedy it, in greater perspective.

(cut)

“After a comprehensive evaluation that took 18 months from start to completion, the investigation committee—comprising three senior HBS colleagues—determined that research misconduct had occurred,” his email said. “After reviewing their detailed report carefully, I could come to no other conclusion, and I accepted their findings.”

He added: “I ultimately accepted the investigation committee’s recommended sanctions, which included immediately placing Professor Gino on administrative leave and correcting the scientific record.”

While it is unclear how the lawsuit will play out, many scientists have expressed concern about the chilling effects it might have on scientists’ willingness to come forward if they suspect research misconduct. 

“If the data are not fraudulent, you ought to be able to show that. If they are, but the fraud was done by someone else, name the person. Suing individual researchers for tens of millions of dollars is a brazen attempt to silence legitimate scientific criticism,” psychologist Yoel Inbar commented on Gino’s statement on LinkedIn.

It is this sentiment that led 13 behavioral scientists (some of whom have coauthored with Gino) to create a GoFundMe campaign on behalf of Simonsohn, Simmons, and Nelson to help raise money for their legal defense. 

Sunday, September 3, 2023

Why Do Evaluative Judgments Affect Emotion Attributions? The Roles of Judgments About Fittingness and the True Self

Prinzing, M., Knobe, J., & Earp, B. D.
(2022). Cognition, Volume 239

Abstract

Past research has found that the value of a person's activities can affect observers' judgments about whether that person is experiencing certain emotions (e.g., people consider morally good agents happier than morally bad agents). One proposed explanation for this effect is that emotion attributions are influenced by judgments about fittingness (whether the emotion is merited). Another hypothesis is that emotion attributions are influenced by judgments about the agent's true self (whether the emotion reflects how the agent feels “deep down”). We tested these hypotheses in six studies. After finding that people think a wide range of emotions can be fitting and reflect a person's true self (Study 1), we tested the predictions of these two hypotheses for attributions of happiness, love, sadness, and hatred. We manipulated the emotions' fittingness (Studies 2a-b and 4) and whether the emotions reflected an agent's true self (Studies 3 and 5), measuring emotion attributions as well as fittingness judgments and true self judgments. The fittingness manipulation only impacted emotion attributions in the cases where it also impacted true self judgments, whereas the true self manipulation impacted emotion attribution in all cases, including those where it did not impact fittingness judgments. These results cast serious doubt on the fittingness hypothesis and offer some support for the true self hypothesis, which could be developed further in future work.

From the Discussion section

What might explain these results? As discussed in the introduction, past research has found that people tend to assume that the true self calls one to be good. This could explain why fitting happiness and love are assumed to reflect a person’s true self. However, there is also evidence that, under certain conditions, people think that morally bad actions and feelings can reflect a person’s true self. In other words, people sometimes override their default assumption that the true self is good. Perhaps this is what is happening when people consider unfitting hatred. If the target of someone’s hatred is perfectly kind and wonderful, then it seems unlikely that there would be any sort of external, social pressure to hate that target person. Hence, if a person hates them nonetheless, people may override their default assumption that the person’s true self is good and conclude that the hatred reflects the person’s true self. Indeed, previous research has found that people think hateful, racist behavior is more reflective of a person’s true self when that person was not raised to be racist (Daigle & Demaree-Cotton, 2022). Hence, it may be that, in line with correspondent inference theory (Jones & Harris, 1967), when a person is not at all encouraged to feel hatred but feels hatred nonetheless, people are more inclined to think that the hatred reflects the person that they are deep down inside.


Some notes: 

The fittingness hypothesis states that people's emotion attributions are influenced by their judgments about whether the emotion is merited or "called for" by the circumstances. For example, people might be less likely to believe that a person is happy if they have just done something morally bad.

The true self hypothesis states that people's emotion attributions are influenced by their judgments about whether the emotion reflects how the person feels "deep down." For example, people might be less likely to believe that a person is happy if they have been acting in a way that is contrary to their true values.

The study found that the true self hypothesis was better supported than the fittingness hypothesis. In other words, people's judgments about whether an emotion reflects the person's true self had a stronger impact on their emotion attributions than their judgments about whether the emotion was merited.

The study's findings suggest that people's emotion attributions are not simply based on the objective circumstances of a situation. Instead, they are also influenced by people's beliefs about the person's true self and whether the emotion is consistent with that person's values.

Saturday, September 2, 2023

Do AI girlfriend apps promote unhealthy expectations for human relationships?

Josh Taylor
The Guardian
Originally posted 21 July 23

Here is an excerpt:

When you sign up for the Eva AI app, it prompts you to create the “perfect partner”, giving you options like “hot, funny, bold”, “shy, modest, considerate” or “smart, strict, rational”. It will also ask if you want to opt in to sending explicit messages and photos.

“Creating a perfect partner that you control and meets your every need is really frightening,” said Tara Hunter, the acting CEO for Full Stop Australia, which supports victims of domestic or family violence. “Given what we know already that the drivers of gender-based violence are those ingrained cultural beliefs that men can control women, that is really problematic.”

Dr Belinda Barnet, a senior lecturer in media at Swinburne University, said the apps cater to a need, but, as with much AI, it will depend on what rules guide the system and how it is trained.

“It’s completely unknown what the effects are,” Barnet said. “With respect to relationship apps and AI, you can see that it fits a really profound social need [but] I think we need more regulation, particularly around how these systems are trained.”

Having a relationship with an AI whose functions are set at the whim of a company also has its drawbacks. Replika’s parent company Luka Inc faced a backlash from users earlier this year when the company hastily removed erotic roleplay functions, a move which many of the company’s users found akin to gutting the Rep’s personality.

Users on the subreddit compared the change to the grief felt at the death of a friend. The moderator on the subreddit noted users were feeling “anger, grief, anxiety, despair, depression, [and] sadness” at the news.

The company ultimately restored the erotic roleplay functionality for users who had registered before the policy change date.

Rob Brooks, an academic at the University of New South Wales, noted at the time the episode was a warning for regulators of the real impact of the technology.

“Even if these technologies are not yet as good as the ‘real thing’ of human-to-human relationships, for many people they are better than the alternative – which is nothing,” he said.


My thoughts: Experts worry that these apps could promote unhealthy expectations for human relationships, as users may come to expect their partners to be perfectly compliant and controllable. Additionally, there is concern that these apps could reinforce harmful gender stereotypes and contribute to violence against women.

The effects of AI girlfriend apps are still largely unknown, and more research is needed to understand their impact on human relationships. In the meantime, it is important to be aware of their potential risks and harms and to regulate them accordingly.

Friday, September 1, 2023

Building Superintelligence Is Riskier Than Russian Roulette

Tam Hunt & Roman Yampolskiy
nautil.us
Originally posted 2 August 23

Here is an excerpt:

The precautionary principle is a long-standing approach for new technologies and methods that urges positive proof of safety before real-world deployment. Companies like OpenAI have so far released their tools to the public with no requirements at all to establish their safety. The burden of proof should be on companies to show that their AI products are safe—not on public advocates to show that those same products are not safe.

Recursively self-improving AI, the kind many companies are already pursuing, is the most dangerous kind, because it may lead to an intelligence explosion some have called “the singularity,” a point in time beyond which it becomes impossible to predict what might happen because AI becomes god-like in its abilities. That moment could happen in the next year or two, or it could be a decade or more away.

Humans won’t be able to anticipate what a far-smarter entity plans to do or how it will carry out its plans. Such superintelligent machines, in theory, will be able to harness all of the energy available on our planet, then the solar system, then eventually the entire galaxy, and we have no way of knowing what those activities will mean for human well-being or survival.

Can we trust that a god-like AI will have our best interests in mind? Similarly, can we trust that human actors using the coming generations of AI will have the best interests of humanity in mind? With the stakes so incredibly high in developing superintelligent AI, we must have a good answer to these questions—before we go over the precipice.

Because of these existential concerns, more scientists and engineers are now working toward addressing them. For example, the theoretical computer scientist Scott Aaronson recently said that he’s working with OpenAI to develop ways of implementing a kind of watermark on the text that the company’s large language models, like GPT-4, produce, so that people can verify the text’s source. It’s still far too little, and perhaps too late, but it is encouraging to us that a growing number of highly intelligent humans are turning their attention to these issues.

Philosopher Toby Ord argues, in his book The Precipice: Existential Risk and the Future of Humanity, that in our ethical thinking and, in particular, when thinking about existential risks like AI, we must consider not just the welfare of today’s humans but the entirety of our likely future, which could extend for billions or even trillions of years if we play our cards right. So the risks stemming from our AI creations need to be considered not only over the next decade or two, but for every decade stretching forward over vast amounts of time. That’s a much higher bar than ensuring AI safety “only” for a decade or two.

Skeptics of these arguments often suggest that we can simply program AI to be benevolent, and if or when it becomes superintelligent, it will still have to follow its programming. This ignores the ability of superintelligent AI to either reprogram itself or to persuade humans to reprogram it. In the same way that humans have figured out ways to transcend our own “evolutionary programming”—caring about all of humanity rather than just our family or tribe, for example—AI will very likely be able to find countless ways to transcend any limitations or guardrails we try to build into it early on.


Here is my summary:

The article argues that building superintelligence is riskier than playing Russian roulette. Further, the authors contend there is no way to guarantee that we will be able to control a superintelligent AI, and that even if we could, it might not share our values. This could lead to the AI harming or even destroying humanity.
The authors propose that we pause current efforts to develop superintelligence and instead focus on understanding the risks involved. They argue that we need a better understanding of how to align AI with our values, as well as safety mechanisms that will prevent AI from harming humanity. (See Shelley's Frankenstein as a literary example.)
The authors propose that we should pause our current efforts to develop superintelligence and instead focus on understanding the risks involved. He argues that we need to develop a better understanding of how to align AI with our values, and that we need to develop safety mechanisms that will prevent AI from harming humanity.  (See Shelley's Frankenstein as a literary example.)