Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Values. Show all posts

Tuesday, December 12, 2023

Health Insurers Have Been Breaking State Laws for Years

Maya Miller and Robin Fields
ProPublica.org
Originally published 16 NOV 23

Here is an excerpt:

State insurance departments are responsible for enforcing these laws, but many are ill-equipped to do so, researchers, consumer advocates and even some regulators say. These agencies oversee all types of insurance, including plans covering cars, homes and people’s health. Yet they employed fewer people last year than they did a decade ago. Their first priority is making sure plans remain solvent; protecting consumers from unlawful denials often takes a backseat.

“They just honestly don’t have the resources to do the type of auditing that we would need,” said Sara McMenamin, an associate professor of public health at the University of California, San Diego, who has been studying the implementation of state mandates.

Agencies often don’t investigate health insurance denials unless policyholders or their families complain. But denials can arrive at the worst moments of people’s lives, when they have little energy to wrangle with bureaucracy. People with plans purchased on HealthCare.gov appealed less than 1% of the time, one study found.

ProPublica surveyed every state’s insurance agency and identified just 45 enforcement actions since 2018 involving denials that have violated coverage mandates. Regulators sometimes treat consumer complaints as one-offs, forcing an insurer to pay for that individual’s treatment without addressing whether a broader group has faced similar wrongful denials.

When regulators have decided to dig deeper, they’ve found that a single complaint is emblematic of a systemic issue impacting thousands of people.

In 2017, a woman complained to Maine’s insurance regulator, saying her carrier, Aetna, broke state law by incorrectly processing claims and overcharging her for services related to the birth of her child. After being contacted by the state, Aetna acknowledged the mistake and issued a refund.


Here's my take:

The article explores the ethical issues surrounding health insurance denials and the violation of state laws. The investigation reveals a pattern of health insurance companies systematically denying coverage for medically necessary treatments, even when such denials directly contravene state laws designed to protect patients. The unethical practices extend to various states, indicating a systemic problem within the industry. Patients are often left in precarious situations, facing financial burdens and health risks due to the denial of essential medical services, raising questions about the industry's commitment to prioritizing patient well-being over profit margins.

The article underscores the need for increased regulatory scrutiny and enforcement to hold health insurance companies accountable for violating state laws and compromising patient care. It highlights the ethical imperative for insurers to prioritize their fundamental responsibility to provide coverage for necessary medical treatments and adhere to the legal frameworks in place to safeguard patient rights. The investigation sheds light on the intersection of profit motives and ethical considerations within the health insurance industry, emphasizing the urgency of addressing these systemic issues to ensure that patients receive the care they require without undue financial or health-related consequences.

Saturday, December 9, 2023

Physicians’ Refusal to Wear Masks to Protect Vulnerable Patients—An Ethical Dilemma for the Medical Profession

Dorfman D, Raz M, Berger Z.
JAMA Health Forum. 2023;4(11):e233780.
doi:10.1001/jamahealthforum.2023.3780

Here is an excerpt:

In theory, the solution to the problem should be simple: patients who wear masks to protect themselves, as recommended by the CDC, can ask the staff and clinicians to wear a mask as well when seeing them, and the clinicians would oblige given the efficacy masks have shown in reducing the spread of respiratory illnesses. However, disabled patients report physicians and other clinical staff having refused to wear a mask when caring for them. Although it is hard to know how prevalent this phenomenon is, what recourse do patients have? How should health care systems approach clinicians and staff who refuse to mask when treating a disabled patient?

Physicians have a history of antagonism to the idea that they themselves might present a health risk to their patients. Famously, when Hungarian physician Ignaz Semmelweis originally proposed handwashing as a measure to reduce puerperal fever, he was met with ridicule and ostracized from the profession.

Physicians were also historically reluctant to adopt new practices to protect not only patients but also physicians themselves against infection in the midst of the AIDS epidemic. In 1985, the CDC presented its guidance on workplace transmission, instructing physicians to provide care, “regardless of whether HCWs [health care workers] or patients are known to be infected with HTLV-III/LAV [human T-lymphotropic virus type III/lymphadenopathy-associated virus] or HBV [hepatitis B virus].” These CDC guidelines offered universal precautions, common-sense, nonstigmatizing, standardized methods to reduce infection. Yet, some physicians bristled at the idea that they need to take simple, universal public health steps to prevent transmission, even in cases in which infectivity is unknown, and instead advocated for a medicalized approach: testing or masking only in cases when a patient is known to be infected. Such an individualized medicalized approach fails to meet the public health needs of the moment.

Patients are the ones who pay the price for physicians’ objections to changes in practices, whether it is handwashing or the denial of care as an unwarranted HIV precaution. Yet today, with the enactment of disability antidiscrimination law, patients are protected, at least on the books.

As we have written elsewhere, federal law supports the right of a disabled individual to request masking as a reasonable disability accommodation in the workplace and at schools.


Here is my summary:

This article explores the ethical dilemma arising from physicians refusing to wear masks, potentially jeopardizing the protection of vulnerable patients. The authors delve into the conflict between personal beliefs and professional responsibilities, questioning the ethical implications of such refusals within the medical profession. The analysis emphasizes the importance of prioritizing patient well-being and public health over individual preferences, calling for a balance between personal freedoms and ethical obligations in healthcare settings.

Wednesday, December 6, 2023

People are increasingly following their heart and not the Bible - poll

Ryan Foley
Christian Today
Originally published 2 DEC 23

A new study reveals that less than one-third of Americans believe the Bible should serve as the foundation for determining right and wrong, even as most people express support for traditional moral values.

The fourth installment of the America's Values Study, released by the Cultural Research Center at Arizona Christian University Tuesday, asked respondents for their thoughts on traditional moral values and what they would like to see as "America's foundation for determining right and wrong." The survey is based on responses from 2,275 U.S. adults collected in July 2022.

Overall, when asked to identify what they viewed as the primary determinant of right and wrong in the U.S., a plurality of participants (42%) said: "what you feel in your heart." An additional 29% cited majority rule as their desired method for determining right and wrong, while just 29% expressed a belief that the principles laid out in the Bible should determine the understanding of right and wrong in the U.S. That figure rose to 66% among Spiritually Active, Governance Engaged Conservative Christians.

The only other demographic subgroups where at least a plurality of respondents indicated a desire for the Bible to serve as the determinant of right and wrong in the U.S. were respondents who attend an evangelical church (62%), self-described Republicans (57%), theologically defined born-again Christians (54%), self-identified conservatives (49%), those who are at least 50 years of age (39%), members of all Protestant congregations (39%), self-identified Christians (38%) and those who attend mainline Protestant churches (36%).

By contrast, an outright majority of respondents who do not identify with a particular faith at all (53%), along with half of LGBT respondents (50%), self-described moderates (47%), political independents (47%), Democrats (46%), self-described liberals (46%) and Catholic Church attendees (46%) maintained that "what you feel in your heart" should form the foundation of what Americans view as right and wrong.

Tuesday, December 5, 2023

On Edge: Understanding and Preventing Young Adults’ Mental Health Challenges

Making Caring Common. (2023).


From the Executive Summary

Our recent data suggests that the young adults of Generation Z are experiencing emotional struggles at alarming rates. While the emotional struggles of teens have been in the national spotlight since the pandemic—and this attention has been vital—according to our nationally representative survey, young adults report roughly twice the rates of anxiety and depression as teens. Compared to 18% of teens, a whopping 36% of young adults in our survey reported anxiety; in contrast to 15% of teens, 29% of young adults reported depression. Far too many young adults report that they feel on edge, lonely, unmoored, directionless, and that they worry about financial security. Many are “achieving to achieve” and find little meaning in either school or work. Yet these struggles of young adults have been largely off the public radar.

From the Press Release:

The report identifies a variety of stressors that may be driving young adults’ high rates of anxiety and
depression. The top drivers of young adults’ mental health challenges include:
  • A lack of meaning, purpose, and direction: Nearly 3 in 5 young adults (58%) reported that they lacked “meaning or purpose” in their lives in the previous month. Half of young adults reported that their mental health was negatively influenced by “not knowing what to do with my life.”
  • Financial worries and achievement pressure: More than half of young adults reported that financial worries (56%) and achievement pressure (51%) were negatively impacting their mental health.
  • A perception that the world is unraveling: Forty-five percent (45%) of young adults reported that a general “sense that things are falling apart” was impairing their mental health.
  • Relationship deficits: Forty-four percent (44%) of young adults reported a sense of not mattering to others and 34% reported loneliness.
  • Social and political issues: Forty-two percent (42%) reported the negative influence on their mental health of gun violence in schools, 34% cited climate change, and 30% cited worries that our political leaders are incompetent or corrupt.
(cut)

The report also suggests strategies for promoting young adults’ mental health and mitigating their
emotional challenges. These include:
  • Cultivating meaning and purpose in young people, including by engaging them in caring for others and service;
  • Supporting young people in developing gratifying and durable relationships; and
  • Helping young people experience their lives as more than the sum of their achievements.
“We need to do much more to support young adults’ mental health and devote more resources to prevention,” said Kiran Bhai, MCC’s Schools & Parenting Programs Director and a co-author of the
report. “This includes reducing the stressors that young people are facing and helping them develop
the skills they need to thrive.”

Wednesday, November 29, 2023

A justification-suppression model of the expression and experience of prejudice

Crandall, C. S., & Eshleman, A. (2003).
Psychological Bulletin, 129(3), 414–446.
https://doi.org/10.1037/0033-2909.129.3.414

Abstract

The authors propose a justification-suppression model (JSM), which characterizes the processes that lead to prejudice expression and the experience of one's own prejudice. They suggest that "genuine" prejudices are not directly expressed but are restrained by beliefs, values, and norms that suppress them. Prejudices are expressed when justifications (e.g., attributions, ideologies, stereotypes) release suppressed prejudices. The same process accounts for which prejudices are accepted into the self-concept. The JSM is used to organize the prejudice literature, and many empirical findings are recharacterized as factors affecting suppression or justification, rather than directly affecting genuine prejudice. The authors discuss the implications of the JSM for several topics, including prejudice measurement, ambivalence, and the distinction between prejudice and its expression.


This is an oldie, but goodie!!  Here is my summary:

This article is about prejudice and the factors that influence its expression. The authors propose a justification-suppression model (JSM) to explain how prejudice is expressed. The JSM suggests that people have genuine prejudices that are not directly expressed. Instead, these prejudices are suppressed by people’s beliefs, values, and norms. Prejudice is expressed when justifications (e.g., attributions, ideologies, stereotypes) release suppressed prejudices.

The authors also discuss the implications of the JSM for prejudice measurement, ambivalence, and the distinction between prejudice and its expression.

Here are some key takeaways from the article:
  • Prejudice is a complex phenomenon that is influenced by a variety of factors, including individual beliefs, values, and norms, as well as social and cultural contexts.
  • People may have genuine prejudices that they do not directly express. These prejudices may be suppressed by people’s beliefs, values, and norms.
  • Prejudice is expressed when justifications (e.g., attributions, ideologies, stereotypes) release suppressed prejudices.
  • The JSM can be used to explain a wide range of findings on prejudice, including prejudice measurement, ambivalence, and the distinction between prejudice and its expression.

Thursday, November 23, 2023

How to Maintain Hope in an Age of Catastrophe

Masha Gessen
The Atlantic
Originally posted 12 Nov 23

Gessen interviews psychoanalyst and author Robert Jay Lifton.  Here is an excerpt from the beginning of the article/interview:

Lifton is fascinated by the range and plasticity of the human mind, its ability to contort to the demands of totalitarian control, to find justification for the unimaginable—the Holocaust, war crimes, the atomic bomb—and yet recover, and reconjure hope. In a century when humanity discovered its capacity for mass destruction, Lifton studied the psychology of both the victims and the perpetrators of horror. “We are all survivors of Hiroshima, and, in our imaginations, of future nuclear holocaust,” he wrote at the end of “Death in Life.” How do we live with such knowledge? When does it lead to more atrocities and when does it result in what Lifton called, in a later book, “species-wide agreement”?

Lifton’s big books, though based on rigorous research, were written for popular audiences. He writes, essentially, by lecturing into a Dictaphone, giving even his most ambitious works a distinctive spoken quality. In between his five large studies, Lifton published academic books, papers and essays, and two books of cartoons, “Birds” and “PsychoBirds.” (Every cartoon features two bird heads with dialogue bubbles, such as, “ ‘All of a sudden I had this wonderful feeling: I am me!’ ” “You were wrong.”) Lifton’s impact on the study and treatment of trauma is unparalleled. In a 2020 tribute to Lifton in the Journal of the American Psychoanalytic Association, his former colleague Charles Strozier wrote that a chapter in “Death in Life” on the psychology of survivors “has never been surpassed, only repeated many times and frequently diluted in its power. All those working with survivors of trauma, personal or sociohistorical, must immerse themselves in his work.”


Here is my summary of the article and helpful tips.  Happy (hopeful) Thanksgiving!!

Hope is not blind optimism or wishful thinking, but rather a conscious decision to act in the face of uncertainty and to believe in the possibility of a better future. The article/interview identifies several key strategies for cultivating hope, including:
  • Nurturing a sense of purpose: Having a clear sense of purpose can provide direction and motivation, even in the darkest of times. This purpose can be rooted in personal goals, relationships, or a commitment to a larger cause.
  • Engaging in meaningful action: Taking concrete steps, no matter how small, can help to combat feelings of helplessness and despair. Action can range from individual acts of kindness to participation in collective efforts for social change.
  • Cultivating a sense of community: Connecting with others who share our concerns can provide a sense of belonging and support. Shared experiences and collective action can amplify our efforts and strengthen our resolve.
  • Maintaining a critical perspective: While it is important to hold onto hope, it is also crucial to avoid complacency or denial. We need to recognize the severity of the challenges we face and to remain vigilant in our efforts to address them.
  • Embracing resilience: Hope is not about denying hardship or expecting a quick and easy resolution to our problems. Rather, it is about cultivating the resilience to persevere through difficult times and to believe in the possibility of positive change.

The article concludes by emphasizing the importance of hope as a driving force for positive change. Hope is not a luxury, but a necessity for survival and for building a better future. By nurturing hope, we can empower ourselves and others to confront the challenges we face and to work towards a more just and equitable world.

Thursday, October 5, 2023

Morality beyond the WEIRD: How the nomological network of morality varies across cultures

Atari, M., Haidt, J., et al. (2023).
Journal of Personality and Social Psychology.
Advance online publication.

Abstract

Moral foundations theory has been a generative framework in moral psychology in the last 2 decades. Here, we revisit the theory and develop a new measurement tool, the Moral Foundations Questionnaire–2 (MFQ-2), based on data from 25 populations. We demonstrate empirically that equality and proportionality are distinct moral foundations while retaining the other four existing foundations of care, loyalty, authority, and purity. Three studies were conducted to develop the MFQ-2 and to examine how the nomological network of moral foundations varies across 25 populations. Study 1 (N = 3,360, five populations) specified a refined top-down approach for measurement of moral foundations. Study 2 (N = 3,902, 19 populations) used a variety of methods (e.g., factor analysis, exploratory structural equations model, network psychometrics, alignment measurement equivalence) to provide evidence that the MFQ-2 fares well in terms of reliability and validity across cultural contexts. We also examined population-level, religious, ideological, and gender differences using the new measure. Study 3 (N = 1,410, three populations) provided evidence for convergent validity of the MFQ-2 scores, expanded the nomological network of the six moral foundations, and demonstrated the improved predictive power of the measure compared with the original MFQ. Importantly, our results showed how the nomological network of moral foundations varied across cultural contexts: consistent with a pluralistic view of morality, different foundations were influential in the network of moral foundations depending on cultural context. These studies sharpen the theoretical and methodological resolution of moral foundations theory and provide the field of moral psychology a more accurate instrument for investigating the many ways that moral conflicts and divisions are shaping the modern world.


Here's my summary:

The article examines how moral foundations theory (MFT) applies to cultures outside the Western, Educated, Industrialized, Rich, and Democratic (WEIRD) world. MFT originally proposed five universal moral foundations: care, fairness, loyalty, authority, and purity. In the new MFQ-2, the authors split fairness into two distinct foundations, equality and proportionality, yielding six foundations in all. Previous research has shown that the relative importance of these foundations can vary across cultures.

The authors of the article conducted three studies to examine the nomological network of morality (i.e., the relationships between different moral foundations) in 25 populations. They found that the nomological network of morality varied significantly across cultures. For example, in some cultures, the foundation of care was more strongly related to the foundation of fairness, while in other cultures, the foundation of loyalty was more strongly related to the foundation of authority.

The authors argue that these findings suggest that MFT needs to be revised to take into account cultural variation. They propose that the nomological network of morality is shaped by a combination of universal moral principles and local cultural norms. This means that there is no single "correct" way to think about morality, and that what is considered moral in one culture may not be considered moral in another.

The article's findings have important implications for our understanding of morality and for cross-cultural research. They suggest that we need to be careful about making assumptions about the moral beliefs of people from other cultures. We also need to be aware of the ways in which culture can influence our own moral judgments.

Thursday, September 21, 2023

The Myth of the Secret Genius

Brian Klaas
The Garden of Forking Paths
Originally posted 30 Nov 22

Here are two excerpts:

A recent research study, involving a collaboration between physicists who model complex systems and an economist, however, has revealed why billionaires are so often mediocre people masquerading as geniuses. Using computer modelling, they developed a fake society in which there is a realistic distribution of talent among competing agents in the simulation. They then applied some pretty simple rules for their model: talent helps, but luck also plays a role.

Then, they tried to see what would happen if they ran and re-ran the simulation over and over.

What did they find? The most talented people in society almost never became extremely rich. As they put it, “the most successful individuals are not the most talented ones and, on the other hand, the most talented individuals are not the most successful ones.”

Why? The answer is simple. If you’ve got a society of, say, 8 billion people, there are literally billions of humans who are in the middle distribution of talent, the largest area of the Bell curve. That means that in a world that is partly defined by random chance, or luck, the odds that someone from the middle levels of talent will end up as the richest person in the society are extremely high.

Look at this first plot, in which the researchers show capital/success (being rich) on the vertical/Y-axis, and talent on the horizontal/X-axis. What’s clear is that society’s richest person is only marginally more talented than average, and there are a lot of people who are extremely talented that are not rich.

Then, they tried to figure out why this was happening. In their simulated world, lucky and unlucky events would affect agents every so often, in a largely random pattern. When they measured the frequency of luck or misfortune for any individual in the simulation, and then plotted it against becoming rich or poor, they found a strong relationship.
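The setup Klaas describes can be sketched in a few lines of code. This is a toy reconstruction, not the researchers' actual model: the talent distribution, event probabilities, starting capital, and number of steps below are all illustrative assumptions, chosen only to show the mechanism of talent plus random luck.

```python
import random

def simulate(n_agents=1000, steps=80, p_event=0.5, seed=42):
    """Toy talent-vs-luck model: talent is normally distributed,
    but capital changes only through random lucky/unlucky events.
    A lucky event pays off only if the agent's talent lets them
    exploit it; an unlucky event halves capital regardless."""
    rng = random.Random(seed)
    # Talent clipped to [0, 1], bell-shaped around a middling mean.
    talent = [min(max(rng.gauss(0.6, 0.1), 0.0), 1.0) for _ in range(n_agents)]
    capital = [10.0] * n_agents  # everyone starts equal
    for _ in range(steps):
        for i in range(n_agents):
            if rng.random() < p_event:            # an event hits this agent
                if rng.random() < 0.5:            # lucky event...
                    if rng.random() < talent[i]:  # ...exploited only with talent
                        capital[i] *= 2
                else:                             # unlucky event
                    capital[i] /= 2
    return talent, capital

talent, capital = simulate()
richest = max(range(len(capital)), key=capital.__getitem__)
print(f"richest agent's talent: {talent[richest]:.2f}")
print(f"max talent in population: {max(talent):.2f}")
```

Re-running with different seeds makes the article's point: the richest agent is usually a moderately talented one who happened to string together lucky events, because the middle of the bell curve contains vastly more candidates than the extreme right tail.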

(cut)

The authors conclude by stating, “Our results highlight the risks of the paradigm that we call ‘naive meritocracy’, which fails to give honors and rewards to the most competent people, because it underestimates the role of randomness among the determinants of success.”

Indeed.


Here is my summary:

The myth of the secret genius: The belief that some people are just born with natural talent and that there is nothing we can do to achieve the same level of success.

The importance of hard work: The vast majority of successful people are not geniuses. They are simply people who have worked hard and persevered in the face of setbacks.

The power of luck: Luck plays a role in everyone's success. Some people are luckier than others, and most people do not factor in luck, as well as other external variables, into their assessment.  This bias is another form of the Fundamental Attribution Error.

The importance of networks: Our networks play a big role in our success. We need to be proactive in building relationships with people who can help us achieve our goals.

Friday, September 8, 2023

He was a top church official who criticized Trump. He says Christianity is in crisis

S. Detrow, G. J. Sanchez, & S. Handel
npr.org
Originally posted 8 Aug 23

Here is an excerpt:

What's the big deal? 

According to Moore, Christianity is in crisis in the United States today.
  • Moore is now editor-in-chief of Christianity Today magazine and has written a new book, Losing Our Religion: An Altar Call for Evangelical America, which is his attempt at finding a path forward for the religion he loves.
  • Moore believes part of the problem is that "almost every part of American life is tribalized and factionalized," and that has extended to the church.
  • "I think if we're going to get past the blood and soil sorts of nationalism or all of the other kinds of totalizing cultural identities, it's going to require rethinking what the church is," he told NPR.
  • During his time in office, Trump embraced a Christian nationalist stance — the idea that the U.S. is a Christian country and should enforce those beliefs. In the run-up to the 2024 presidential election, Republican candidates are again vying for the influential evangelical Christian vote, demonstrating its continued influence in politics.
  • In Aug. 2022, church leaders confirmed the Department of Justice was investigating Southern Baptists following a sexual abuse crisis. In a statement, SBC leaders said: "Current leaders across the SBC have demonstrated a firm conviction to address those issues of the past and are implementing measures to ensure they are never repeated in the future."
  • In 2017, the church voted to formally "denounce and repudiate" white nationalism at its annual meeting.

What is he saying? 

Moore spoke to All Things Considered's Scott Detrow about what he thinks the path forward is for evangelicalism in America.

On why he thinks Christianity is in crisis:
It was the result of having multiple pastors tell me, essentially, the same story about quoting the Sermon on the Mount, parenthetically, in their preaching — "turn the other cheek" — [and] to have someone come up after to say, "Where did you get those liberal talking points?" And what was alarming to me is that in most of these scenarios, when the pastor would say, "I'm literally quoting Jesus Christ," the response would not be, "I apologize." The response would be, "Yes, but that doesn't work anymore. That's weak." And when we get to the point where the teachings of Jesus himself are seen as subversive to us, then we're in a crisis.


Thursday, September 7, 2023

AI Should Be Terrified of Humans

Brian Kateman
Time.com
Originally posted 24 July 23

Here are two excerpts:

Humans have a pretty awful track record for how we treat others, including other humans. All manner of exploitation, slavery, and violence litters human history. And today, billions upon billions of animals are tortured by us in all sorts of obscene ways, while we ignore the plight of others. There’s no quick answer to ending all this suffering. Let’s not wait until we’re in a similar situation with AI, where their exploitation is so entrenched in our society that we don’t know how to undo it. If we take for granted starting right now that maybe, just possibly, some forms of AI are or will be capable of suffering, we can work with the intention to build a world where they don’t have to.

(cut)

Today, many scientists and philosophers are looking at the rise of artificial intelligence from the other end—as a potential risk to humans or even humanity as a whole. Some are raising serious concerns over the encoding of social biases like racism and sexism into computer programs, wittingly or otherwise, which can end up having devastating effects on real human beings caught up in systems like healthcare or law enforcement. Others are thinking earnestly about the risks of a digital-being-uprising and what we need to do to make sure we’re not designing technology that will view humans as an adversary and potentially act against us in one way or another. But more and more thinkers are rightly speaking out about the possibility that future AI should be afraid of us.

“We rationalize unmitigated cruelty toward animals—caging, commodifying, mutilating, and killing them to suit our whims—on the basis of our purportedly superior intellect,” Marina Bolotnikova writes in a recent piece for Vox. “If sentience in AI could ever emerge…I’m doubtful we’d be willing to recognize it, for the same reason that we’ve denied its existence in animals.” Working in animal protection, I’m sadly aware of the various ways humans subjugate and exploit other species. Indeed, it’s not only our impressive reasoning skills, our use of complex language, or our ability to solve difficult problems and introspect that makes us human; it’s also our unparalleled ability to increase non-human suffering. Right now there’s no reason to believe that we aren’t on a path to doing the same thing to AI. Consider that despite our moral progress as a species, we torture more non-humans today than ever before. We do this not because we are sadists, but because even when we know individual animals feel pain, we derive too much profit and pleasure from their exploitation to stop.


Wednesday, September 6, 2023

Could a Large Language Model Be Conscious?

David Chalmers
Boston Review
Originally posted 9 Aug 23

Here are two excerpts:

Consciousness also matters morally. Conscious systems have moral status. If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them. More generally, conscious AI will be a step on the path to human level artificial general intelligence. It will be a major step that we shouldn’t take unreflectively or unknowingly.

This gives rise to a second challenge: Should we create conscious AI? This is a major ethical challenge for the community. The question is important and the answer is far from obvious.

We already face many pressing ethical challenges about large language models. There are issues about fairness, about safety, about truthfulness, about justice, about accountability. If conscious AI is coming somewhere down the line, then that will raise a new group of difficult ethical challenges, with the potential for new forms of injustice added on top of the old ones. One issue is that conscious AI could well lead to new harms toward humans. Another is that it could lead to new harms toward AI systems themselves.

I’m not an ethicist, and I won’t go deeply into the ethical questions here, but I don’t take them lightly. I don’t want the roadmap to conscious AI that I’m laying out here to be seen as a path that we have to go down. The challenges I’m laying out in what follows could equally be seen as a set of red flags. Each challenge we overcome gets us closer to conscious AI, for better or for worse. We need to be aware of what we’re doing and think hard about whether we should do it.

(cut)

Where does the overall case for or against LLM consciousness stand?

Where current LLMs such as the GPT systems are concerned: I think none of the reasons for denying consciousness in these systems is conclusive, but collectively they add up. We can assign some extremely rough numbers for illustrative purposes. On mainstream assumptions, it wouldn’t be unreasonable to hold that there’s at least a one-in-three chance—that is, to have a subjective probability or credence of at least one-third—that biology is required for consciousness. The same goes for the requirements of sensory grounding, self models, recurrent processing, global workspace, and unified agency. If these six factors were independent, it would follow that there’s less than a one-in-ten chance that a system lacking all six, like a current paradigmatic LLM, would be conscious. Of course the factors are not independent, which drives the figure somewhat higher. On the other hand, the figure may be driven lower by other potential requirements X that we have not considered.

Taking all that into account might leave us with confidence somewhere under 10 percent in current LLM consciousness. You shouldn’t take the numbers too seriously (that would be specious precision), but the general moral is that given mainstream assumptions about consciousness, it’s reasonable to have a low credence that current paradigmatic LLMs such as the GPT systems are conscious.
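Chalmers's back-of-envelope arithmetic above is easy to reproduce. The sketch below takes the six factors and the one-in-three credence straight from the excerpt; treating the factors as independent is the simplification Chalmers himself flags as unrealistic:

```python
# Chalmers's illustrative estimate: six candidate requirements for
# consciousness (biology, sensory grounding, self models, recurrent
# processing, global workspace, unified agency), each assigned a
# rough 1/3 credence of being genuinely required.
credence_required = 1 / 3
num_factors = 6

# If the factors were independent, a system lacking all six is
# conscious only if *none* of the six is actually required.
p_conscious = (1 - credence_required) ** num_factors
print(f"{p_conscious:.3f}")  # about 0.088 -- "less than a one-in-ten chance"
```

As the excerpt notes, this is specious precision: correlations among the factors push the figure up, and unconsidered requirements push it down, so only the rough order of magnitude matters.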


Here are some of the key points from the article:
  1. There is no consensus on what consciousness is, so it is difficult to say definitively whether or not LLMs are conscious.
  2. Some people believe that consciousness requires carbon-based biology, but Chalmers argues that this is a form of biological chauvinism.  I agree with this completely. We can have synthetic forms of consciousness.
  3. Other people believe that LLMs are not conscious because they lack sensory processing or bodily embodiment. Chalmers argues that these objections are not decisive, but they do raise important questions about the nature of consciousness.
  4. Chalmers concludes by suggesting that we should take the possibility of LLM consciousness seriously, but that we should also be cautious about making definitive claims about it.

Friday, September 1, 2023

Building Superintelligence Is Riskier Than Russian Roulette

Tam Hunt & Roman Yampolskiy
nautil.us
Originally posted 2 August 23

Here is an excerpt:

The precautionary principle is a long-standing approach for new technologies and methods that urges positive proof of safety before real-world deployment. Companies like OpenAI have so far released their tools to the public with no requirements at all to establish their safety. The burden of proof should be on companies to show that their AI products are safe—not on public advocates to show that those same products are not safe.

Recursively self-improving AI, the kind many companies are already pursuing, is the most dangerous kind, because it may lead to an intelligence explosion some have called “the singularity,” a point in time beyond which it becomes impossible to predict what might happen because AI becomes god-like in its abilities. That moment could happen in the next year or two, or it could be a decade or more away.

Humans won’t be able to anticipate what a far-smarter entity plans to do or how it will carry out its plans. Such superintelligent machines, in theory, will be able to harness all of the energy available on our planet, then the solar system, then eventually the entire galaxy, and we have no way of knowing what those activities will mean for human well-being or survival.

Can we trust that a god-like AI will have our best interests in mind? Similarly, can we trust that human actors using the coming generations of AI will have the best interests of humanity in mind? With the stakes so incredibly high in developing superintelligent AI, we must have a good answer to these questions—before we go over the precipice.

Because of these existential concerns, more scientists and engineers are now working toward addressing them. For example, the theoretical computer scientist Scott Aaronson recently said that he’s working with OpenAI to develop ways of implementing a kind of watermark on the text that the company’s large language models, like GPT-4, produce, so that people can verify the text’s source. It’s still far too little, and perhaps too late, but it is encouraging to us that a growing number of highly intelligent humans are turning their attention to these issues.

Philosopher Toby Ord argues, in his book The Precipice: Existential Risk and the Future of Humanity, that in our ethical thinking and, in particular, when thinking about existential risks like AI, we must consider not just the welfare of today’s humans but the entirety of our likely future, which could extend for billions or even trillions of years if we play our cards right. So the risks stemming from our AI creations need to be considered not only over the next decade or two, but for every decade stretching forward over vast amounts of time. That’s a much higher bar than ensuring AI safety “only” for a decade or two.

Skeptics of these arguments often suggest that we can simply program AI to be benevolent, and if or when it becomes superintelligent, it will still have to follow its programming. This ignores the ability of superintelligent AI to either reprogram itself or to persuade humans to reprogram it. In the same way that humans have figured out ways to transcend our own “evolutionary programming”—caring about all of humanity rather than just our family or tribe, for example—AI will very likely be able to find countless ways to transcend any limitations or guardrails we try to build into it early on.


Here is my summary:

The article argues that building superintelligence is a risky endeavor, even more so than playing Russian roulette. Further, there is no way to guarantee that we will be able to control a superintelligent AI, and that even if we could, it is possible that the AI would not share our values. This could lead to the AI harming or even destroying humanity.

The authors propose that we pause current efforts to develop superintelligence and instead focus on understanding the risks involved. They argue that we need a better understanding of how to align AI with our values, and that we need safety mechanisms that will prevent AI from harming humanity.  (See Shelley's Frankenstein as a literary example.)

Friday, August 11, 2023

How and why people want to be more moral

Sun, J., Wilt, J. A., Meindl et al. (2023).
Journal of Personality.
https://doi.org/10.1111/jopy.12812

Abstract

Objective

What types of moral improvements do people wish to make? Do they hope to become more good, or less bad? Do they wish to be more caring? More honest? More loyal? And why exactly do they want to become more moral? Presumably, most people want to improve their morality because this would benefit others, but is this in fact their primary motivation? Here, we begin to investigate these questions.

Method

Across two large, preregistered studies (N = 1818), participants provided open-ended descriptions of one change they could make in order to become more moral; they then reported their beliefs about and motives for this change.

Results

In both studies, people most frequently expressed desires to improve their compassion and more often framed their moral improvement goals in terms of amplifying good behaviors than curbing bad ones. The strongest predictor of moral motivation was the extent to which people believed that making the change would have positive consequences for their own well-being.

Conclusions

Together, these studies provide rich descriptive insights into how ordinary people want to be more moral, and show that they are particularly motivated to do so for their own sake.


My summary:
  • People most frequently expressed desires to improve their compassion. This suggests that people are motivated to become more moral in order to be more caring and helpful to others.
  • People more often framed their moral improvement goals in terms of amplifying good behaviors than curbing bad ones. This suggests that people are motivated to become more moral by doing more good things, rather than by simply avoiding doing bad things.
  • The strongest predictor of moral motivation was the extent to which people believed that making the change would have positive consequences for their own well-being. This suggests that people are motivated to become more moral for their own sake, as well as for the sake of others.

Friday, July 21, 2023

Belief in Five Spiritual Entities Edges Down to New Lows

Megan Brenan
news.gallup.com
Originally posted 20 July 23

The percentages of Americans who believe in each of five religious entities -- God, angels, heaven, hell and the devil -- have edged downward by three to five percentage points since 2016. Still, majorities believe in each, ranging from a high of 74% believing in God to lows of 59% for hell and 58% for the devil. About two-thirds each believe in angels (69%) and heaven (67%).

Gallup has used this framework to measure belief in these spiritual entities five times since 2001, and the May 1-24, 2023, poll finds that each is at its lowest point. Compared with 2001, belief in God and heaven is down the most (16 points each), while belief in hell has fallen 12 points, and the devil and angels are down 10 points each.

This question asks respondents whether they believe in each concept or if they are unsure, and from 13% to 15% currently say they are not sure. At the same time, nearly three in 10 U.S. adults do not believe in the devil or hell, while almost two in 10 do not believe in angels and heaven, and 12% say they do not believe in God.

As the percentage of believers has dropped over the past two decades, the corresponding increases have occurred mostly in nonbelief, with much smaller increases in uncertainty. This is true for all but belief in God, which has seen nearly equal increases in uncertainty and nonbelief.

In the current poll, about half of Americans, 51%, believe in all five spiritual entities, while 11% do not believe in any of them. Another 7% are not sure about all of them, while the rest (31%) believe in some and not others.

Gallup periodically measures Americans’ belief in God with different question wordings, producing slightly different results. While the majority of U.S. adults say they believe in God regardless of the question wording, when not offered the option to say they are unsure, significantly more (81% in a survey conducted last year) said they believe in God.



My take: Despite the decline in belief, majorities of Americans still believe in each of the five spiritual entities. This suggests that religion remains an important part of American culture, even as the country becomes more secularized.

Saturday, July 8, 2023

Microsoft Scraps Entire Ethical AI Team Amid AI Boom

Lauren Leffer
gizmodo.com
Updated on March 14, 2023
Still relevant

Microsoft is currently in the process of shoehorning text-generating artificial intelligence into every single product that it can. And starting this month, the company will be continuing on its AI rampage without a team dedicated to internally ensuring those AI features meet Microsoft’s ethical standards, according to a Monday night report from Platformer.

Microsoft has scrapped its whole Ethics and Society team within the company’s AI sector, as part of ongoing layoffs set to impact 10,000 total employees, per Platformer. The company maintains its Office of Responsible AI, which creates the broad, Microsoft-wide principles to govern corporate AI decision making. But the ethics and society taskforce, which bridged the gap between policy and products, is reportedly no more.

Gizmodo reached out to Microsoft to confirm the news. In response, a company spokesperson sent the following statement:
Microsoft remains committed to developing and designing AI products and experiences safely and responsibly. As the technology has evolved and strengthened, so has our investment, which at times has meant adjusting team structures to be more effective. For example, over the past six years we have increased the number of people within our product teams who are dedicated to ensuring we adhere to our AI principles. We have also increased the scale and scope of our Office of Responsible AI, which provides cross-company support for things like reviewing sensitive use cases and advocating for policies that protect customers.

To Platformer, the company reportedly previously shared this slightly different version of the same statement:

Microsoft is committed to developing AI products and experiences safely and responsibly...Over the past six years we have increased the number of people across our product teams within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice...We appreciate the trailblazing work the ethics and society team did to help us on our ongoing responsible AI journey.

Note that, in this older version, Microsoft does inadvertently confirm that the ethics and society team is no more. The older statement also placed the staffing increases specifically within the Office of Responsible AI, rather than attributing them generally to people “dedicated to ensuring we adhere to our AI principles.”

Yet, despite Microsoft’s reassurances, former employees told Platformer that the Ethics and Society team played a key role translating big ideas from the responsibility office into actionable changes at the product development level.


Friday, June 30, 2023

The psychology of zero-sum beliefs

Davidai, S., Tepper, S.J. 
Nat Rev Psychol (2023). 

Abstract

People often hold zero-sum beliefs (subjective beliefs that, independent of the actual distribution of resources, one party’s gains are inevitably accrued at other parties’ expense) about interpersonal, intergroup and international relations. In this Review, we synthesize social, cognitive, evolutionary and organizational psychology research on zero-sum beliefs. In doing so, we examine when, why and how such beliefs emerge and what their consequences are for individuals, groups and society.  Although zero-sum beliefs have been mostly conceptualized as an individual difference and a generalized mindset, their emergence and expression are sensitive to cognitive, motivational and contextual forces. Specifically, we identify three broad psychological channels that elicit zero-sum beliefs: intrapersonal and situational forces that elicit threat, generate real or imagined resource scarcity, and inhibit deliberation. This systematic study of zero-sum beliefs advances our understanding of how these beliefs arise, how they influence people’s behaviour and, we hope, how they can be mitigated.

From the Summary and Future Directions section

We have suggested that zero-sum beliefs are influenced by threat, a sense of resource scarcity and lack of deliberation. Although each of these three channels can separately lead to zero-sum beliefs, simultaneously activating more than one channel might be especially potent. For instance, focusing on losses (versus gains) is both threatening and heightens a sense of resource scarcity. Consequently, focusing on losses might be especially likely to foster zero-sum beliefs. Similarly, insufficient deliberation on the long-term and dynamic effects of international trade might foster a view of domestic currency as scarce, prompting the belief that trade is zero-sum. Thus, any factor that simultaneously affects the threat that people experience, their perceptions of resource scarcity, and their level of deliberation is more likely to result in zero-sum beliefs, and attenuating zero-sum beliefs requires an exploration of all the different factors that lead to these experiences in the first place. For instance, increasing deliberation reduces zero-sum beliefs about negotiations by increasing people’s accountability, perspective taking or consideration of mutually beneficial issues. Future research could manipulate deliberation in other contexts to examine its causal effect on zero-sum beliefs. Indeed, because people express more moderate beliefs after deliberating policy details, prompting participants to deliberate about social issues (for example, asking them to explain the process by which one group’s outcomes influence another group’s outcomes) might reduce zero-sum beliefs. More generally, research could examine long-term and scalable solutions for reducing zero-sum beliefs, focusing on interventions that simultaneously reduce threat, mitigate views of resource scarcity and increase deliberation.  
For instance, as formal training in economics is associated with lower zero-sum beliefs, researchers could examine whether teaching people basic economic principles reduces zero-sum beliefs across various domains. Similarly, because higher socioeconomic status is negatively associated with zero-sum beliefs, creating a sense of abundance might counter the belief that life is zero-sum.

Thursday, June 29, 2023

Fairytales have always reflected the morals of the age. It’s not a sin to rewrite them

Martha Gill
The Guardian
Originally posted 4 June 23

Here are two excerpts:

General outrage greeted “woke” updates to Roald Dahl books this year, and still periodically erupts over Disney remakes, most recently a forthcoming film with a Latina actress as Snow White, and a new Peter Pan & Wendy with “lost girls”. The argument is that too much fashionable refurbishment tends to ruin a magical kingdom, and that cult classics could do with the sort of Grade I listing applied to heritage buildings. If you want to tell new stories, fine – but why not start from scratch?

But this point of view misses something, which is that updating classics is itself an ancient part of literary culture; in fact, it is a tradition, part of our heritage too. While the larger portion of the literary canon is carefully preserved, a slice of it has always been more flexible, to be retold and reshaped as times change.

Fairytales fit within this latter custom: they have been updated, periodically, for many hundreds of years. Cult figures such as Dracula, Frankenstein and Sherlock Holmes fit there too, as do superheroes: each generation, you might say, gets the heroes it deserves. And so does Bond. Modernity is both a villain and a hero within the Bond franchise: the dramatic tension between James – a young cosmopolitan “dinosaur” – and the passing of time has always been part of the fun.

This tradition has a richness to it: it is a historical record of sorts. Look at the progress of the fairy story through the ages and you get a twisty tale of dubious progress, a moral journey through the woods. You could say fairytales have always been politically correct – that is, tweaked to reflect whatever morals a given cohort of parents most wanted to teach their children.

(cut)

The idea that we are pasting over history – censoring important artefacts – is wrongheaded too. It is not as if old films or books have been burned, wiped from the internet or removed from libraries. With today’s propensity for writing things down, common since the 1500s, there is no reason to fear losing the “original” stories.

As for the suggestion that minority groups should make their own stories instead – this is a sly form of exclusion. Ancient universities and gentlemen’s clubs once made similar arguments; why couldn’t exiled individuals simply set up their own versions? It is not so easy. Old stories weave themselves deep into the tapestry of a nation; newer ones will necessarily be confined to the margins.


My take: Updating classic stories can be beneficial and even necessary to promote inclusion, diversity, equity, and fairness. By not updating these stories, we risk perpetuating harmful stereotypes and narratives that reinforce the dominant culture. When we update classic stories, we can create new possibilities for representation and understanding that can help to build a more just and equitable world.  Dominant cultures need to cede power to promote more unity in a multicultural nation.

Saturday, June 24, 2023

The Darwinian Argument for Worrying About AI

Dan Hendrycks
Time.com
Originally posted 31 May 23

Here is an excerpt:

In the biological realm, evolution is a slow process. For humans, it takes nine months to create the next generation and around 20 years of schooling and parenting to produce fully functional adults. But scientists have observed meaningful evolutionary changes in species with rapid reproduction rates, like fruit flies, in fewer than 10 generations. Unconstrained by biology, AIs could adapt—and therefore evolve—even faster than fruit flies do.

There are three reasons this should worry us. The first is that selection effects make AIs difficult to control. Whereas AI researchers once spoke of “designing” AIs, they now speak of “steering” them. And even our ability to steer is slipping out of our grasp as we let AIs teach themselves and increasingly act in ways that even their creators do not fully understand. In advanced artificial neural networks, we understand the inputs that go into the system, but the output emerges from a “black box” with a decision-making process largely indecipherable to humans.

Second, evolution tends to produce selfish behavior. Amoral competition among AIs may select for undesirable traits. AIs that successfully gain influence and provide economic value will predominate, replacing AIs that act in a more narrow and constrained manner, even if this comes at the cost of lowering guardrails and safety measures. As an example, most businesses follow laws, but in situations where stealing trade secrets or deceiving regulators is highly lucrative and difficult to detect, a business that engages in such selfish behavior will most likely outperform its more principled competitors.

Selfishness doesn’t require malice or even sentience. When an AI automates a task and leaves a human jobless, this is selfish behavior without any intent. If competitive pressures continue to drive AI development, we shouldn’t be surprised if they act selfishly too.

The third reason is that evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation. Skeptics of AI risks often ask, “Couldn’t we just turn the AI off?” There are a variety of practical challenges here. The AI could be under the control of a different nation or a bad actor. Or AIs could be integrated into vital infrastructure, like power grids or the internet. When embedded into these critical systems, the cost of disabling them may prove too high for us to accept since we would become dependent on them. AIs could become embedded in our world in ways that we can’t easily reverse. But natural selection poses a more fundamental barrier: we will select against AIs that are easy to turn off, and we will come to depend on AIs that we are less likely to turn off.

These strong economic and strategic pressures to adopt the systems that are most effective mean that humans are incentivized to cede more and more power to AI systems that cannot be reliably controlled, putting us on a pathway toward being supplanted as the earth’s dominant species. There are no easy, surefire solutions to our predicament.

Sunday, June 18, 2023

Gender-Affirming Care for Trans Youth Is Neither New nor Experimental: A Timeline and Compilation of Studies

Julia Serano
Medium.com
Originally posted 16 May 23

Trans and gender-diverse people are a pancultural and transhistorical phenomenon. It is widely understood that we, like LGBTQ+ people more generally, arise due to natural variation rather than the result of pathology, modernity, or the latest conspiracy theory.

Gender-affirming healthcare has a long history. The first trans-related surgeries were carried out in the 1910s–1930s (Meyerowitz, 2002, pp. 16–21). While some doctors were supportive early on, most were wary. Throughout the mid-twentieth century, these skeptical doctors subjected trans people to all sorts of alternate treatments — from perpetual psychoanalysis, to aversion and electroshock therapies, to administering assigned-sex-consistent hormones (e.g., testosterone for trans female/feminine people), and so on — but none of them worked. The only treatment that reliably allowed trans people to live happy and healthy lives was allowing them to transition. While doctors were initially worried that many would eventually come to regret that decision, study after study has shown that gender-affirming care has a far lower regret rate (typically around 1 or 2 percent) than virtually any other medical procedure. Given all this, plus the fact that there is no test for being trans (medical, psychological, or otherwise), around the turn of the century, doctors began moving away from strict gatekeeping and toward an informed consent model for trans adults to attain gender-affirming care.

Trans children have always existed — indeed most trans adults can tell you about their trans childhoods. During the twentieth century, while some trans kids did socially transition (Gill-Peterson, 2018), most had their gender identities disaffirmed, either by parents who disbelieved them or by doctors who subjected them to “gender reparative” or “conversion” therapies. The rationale behind the latter was a belief at that time that gender identity was flexible and subject to change during early childhood, but we now know that this is not true (see e.g., Diamond & Sigmundson, 1997; Reiner & Gearhart, 2004). Over the years, it became clear that these conversion efforts were not only ineffective, but they caused real harm — this is why most health professional organizations oppose them today.

Given the harm caused by gender-disaffirming approaches, around the turn of the century, doctors and gender clinics began moving toward what has come to be known as the gender affirmative model — here’s how I briefly described this approach in my 2016 essay Detransition, Desistance, and Disinformation: A Guide for Understanding Transgender Children Debates:

Rather than being shamed by their families and coerced into gender conformity, these children are given the space to explore their genders. If they consistently, persistently, and insistently identify as a gender other than the one they were assigned at birth, then their identity is respected, and they are given the opportunity to live as a member of that gender. If they remain happy in their identified gender, then they may later be placed on puberty blockers to stave off unwanted bodily changes until they are old enough (often at age sixteen) to make an informed decision about whether or not to hormonally transition. If they change their minds at any point along the way, then they are free to make the appropriate life changes and/or seek out other identities.

Saturday, June 17, 2023

Debt Collectors Want To Use AI Chatbots To Hustle People For Money

Corin Faife
vice.com
Originally posted 18 MAY 23

Here are two excerpts:

The prospect of automated AI systems making phone calls to distressed people adds another dystopian element to an industry that has long targeted poor and marginalized people. Debt collection and enforcement is far more likely to occur in Black communities than white ones, and research has shown that predatory debt and interest rates exacerbate poverty by keeping people trapped in a never-ending cycle. 

In recent years, borrowers in the US have been piling on debt. In the fourth quarter of 2022, household debt rose to a record $16.9 trillion according to the New York Federal Reserve, accompanied by an increase in delinquency rates on larger debt obligations like mortgages and auto loans. Outstanding credit card balances are at record levels, too. The pandemic generated a huge boom in online spending, and besides traditional credit cards, younger spenders were also hooked by fintech startups pushing new finance products, like the extremely popular “buy now, pay later” model of Klarna, Sezzle, Quadpay and the like.

So debt is mounting, and with interest rates up, more and more people are missing payments. That means more outstanding debts being passed on to collection, giving the industry a chance to sprinkle some AI onto the age-old process of prodding, coaxing, and pressuring people to pay up.

For an insight into how this works, we need look no further than the sales copy of companies that make debt collection software. Here, products are described in a mix of generic corp-speak and dystopian portent: SmartAction, another conversational AI product like Skit, has a debt collection offering that claims to help with “alleviating the negative feelings customers might experience with a human during an uncomfortable process”—because they’ll surely be more comfortable trying to negotiate payments with a robot instead. 

(cut)

“Striking the right balance between assertiveness and empathy is a significant challenge in debt collection,” the company writes in the blog post, which claims GPT-4 has the ability to be “firm and compassionate” with customers.

When algorithmic, dynamically optimized systems are applied to sensitive areas like credit and finance, there’s a real possibility that bias is being unknowingly introduced. A McKinsey report into digital collections strategies plainly suggests that AI can be used to identify and segment customers by risk profile—i.e. credit score plus whatever other data points the lender can factor in—and fine-tune contact techniques accordingly.