Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, November 29, 2023

A justification-suppression model of the expression and experience of prejudice

Crandall, C. S., & Eshleman, A. (2003).
Psychological Bulletin, 129(3), 414–446.


The authors propose a justification-suppression model (JSM), which characterizes the processes that lead to prejudice expression and the experience of one's own prejudice. They suggest that "genuine" prejudices are not directly expressed but are restrained by beliefs, values, and norms that suppress them. Prejudices are expressed when justifications (e.g., attributions, ideologies, stereotypes) release suppressed prejudices. The same process accounts for which prejudices are accepted into the self-concept. The JSM is used to organize the prejudice literature, and many empirical findings are recharacterized as factors affecting suppression or justification, rather than directly affecting genuine prejudice. The authors discuss the implications of the JSM for several topics, including prejudice measurement, ambivalence, and the distinction between prejudice and its expression.

This is an oldie, but a goodie!  Here is my summary:

This article is about prejudice and the factors that influence its expression. The authors propose a justification-suppression model (JSM) to explain how prejudice is expressed. The JSM suggests that people have genuine prejudices that are not directly expressed. Instead, these prejudices are suppressed by people’s beliefs, values, and norms. Prejudice is expressed when justifications (e.g., attributions, ideologies, stereotypes) release suppressed prejudices.

The authors also discuss the implications of the JSM for prejudice measurement, ambivalence, and the distinction between prejudice and its expression.

Here are some key takeaways from the article:
  • Prejudice is a complex phenomenon that is influenced by a variety of factors, including individual beliefs, values, and norms, as well as social and cultural contexts.
  • People may have genuine prejudices that they do not directly express. These prejudices may be suppressed by people’s beliefs, values, and norms.
  • Prejudice is expressed when justifications (e.g., attributions, ideologies, stereotypes) release suppressed prejudices.
  • The JSM can be used to explain a wide range of findings on prejudice, including prejudice measurement, ambivalence, and the distinction between prejudice and its expression.

Thursday, November 23, 2023

How to Maintain Hope in an Age of Catastrophe

Masha Gessen
The Atlantic
Originally posted 12 Nov 23

Gessen interviews psychoanalyst and author Robert Jay Lifton.  Here is an excerpt from the beginning of the article/interview:

Lifton is fascinated by the range and plasticity of the human mind, its ability to contort to the demands of totalitarian control, to find justification for the unimaginable—the Holocaust, war crimes, the atomic bomb—and yet recover, and reconjure hope. In a century when humanity discovered its capacity for mass destruction, Lifton studied the psychology of both the victims and the perpetrators of horror. “We are all survivors of Hiroshima, and, in our imaginations, of future nuclear holocaust,” he wrote at the end of “Death in Life.” How do we live with such knowledge? When does it lead to more atrocities and when does it result in what Lifton called, in a later book, “species-wide agreement”?

Lifton’s big books, though based on rigorous research, were written for popular audiences. He writes, essentially, by lecturing into a Dictaphone, giving even his most ambitious works a distinctive spoken quality. In between his five large studies, Lifton published academic books, papers and essays, and two books of cartoons, “Birds” and “PsychoBirds.” (Every cartoon features two bird heads with dialogue bubbles, such as, “ ‘All of a sudden I had this wonderful feeling: I am me!’ ” “You were wrong.”) Lifton’s impact on the study and treatment of trauma is unparalleled. In a 2020 tribute to Lifton in the Journal of the American Psychoanalytic Association, his former colleague Charles Strozier wrote that a chapter in “Death in Life” on the psychology of survivors “has never been surpassed, only repeated many times and frequently diluted in its power. All those working with survivors of trauma, personal or sociohistorical, must immerse themselves in his work.”

Here is my summary of the article and helpful tips.  Happy (hopeful) Thanksgiving!!

Hope is not blind optimism or wishful thinking, but rather a conscious decision to act in the face of uncertainty and to believe in the possibility of a better future. The article/interview identifies several key strategies for cultivating hope, including:
  • Nurturing a sense of purpose: Having a clear sense of purpose can provide direction and motivation, even in the darkest of times. This purpose can be rooted in personal goals, relationships, or a commitment to a larger cause.
  • Engaging in meaningful action: Taking concrete steps, no matter how small, can help to combat feelings of helplessness and despair. Action can range from individual acts of kindness to participation in collective efforts for social change.
  • Cultivating a sense of community: Connecting with others who share our concerns can provide a sense of belonging and support. Shared experiences and collective action can amplify our efforts and strengthen our resolve.
  • Maintaining a critical perspective: While it is important to hold onto hope, it is also crucial to avoid complacency or denial. We need to recognize the severity of the challenges we face and to remain vigilant in our efforts to address them.
  • Embracing resilience: Hope is not about denying hardship or expecting a quick and easy resolution to our problems. Rather, it is about cultivating the resilience to persevere through difficult times and to believe in the possibility of positive change.

The article concludes by emphasizing the importance of hope as a driving force for positive change. Hope is not a luxury, but a necessity for survival and for building a better future. By nurturing hope, we can empower ourselves and others to confront the challenges we face and to work towards a more just and equitable world.

Thursday, October 5, 2023

Morality beyond the WEIRD: How the nomological network of morality varies across cultures

Atari, M., Haidt, J., et al. (2023).
Journal of Personality and Social Psychology.
Advance online publication.


Moral foundations theory has been a generative framework in moral psychology in the last 2 decades. Here, we revisit the theory and develop a new measurement tool, the Moral Foundations Questionnaire–2 (MFQ-2), based on data from 25 populations. We demonstrate empirically that equality and proportionality are distinct moral foundations while retaining the other four existing foundations of care, loyalty, authority, and purity. Three studies were conducted to develop the MFQ-2 and to examine how the nomological network of moral foundations varies across 25 populations. Study 1 (N = 3,360, five populations) specified a refined top-down approach for measurement of moral foundations. Study 2 (N = 3,902, 19 populations) used a variety of methods (e.g., factor analysis, exploratory structural equations model, network psychometrics, alignment measurement equivalence) to provide evidence that the MFQ-2 fares well in terms of reliability and validity across cultural contexts. We also examined population-level, religious, ideological, and gender differences using the new measure. Study 3 (N = 1,410, three populations) provided evidence for convergent validity of the MFQ-2 scores, expanded the nomological network of the six moral foundations, and demonstrated the improved predictive power of the measure compared with the original MFQ. Importantly, our results showed how the nomological network of moral foundations varied across cultural contexts: consistent with a pluralistic view of morality, different foundations were influential in the network of moral foundations depending on cultural context. These studies sharpen the theoretical and methodological resolution of moral foundations theory and provide the field of moral psychology a more accurate instrument for investigating the many ways that moral conflicts and divisions are shaping the modern world.

Here's my summary:

The article examines how moral foundations theory (MFT) applies to cultures outside of the Western, Educated, Industrialized, Rich, and Democratic (WEIRD) world. MFT originally proposed five foundations: care, fairness, loyalty, authority, and purity. The revised measure developed here, the MFQ-2, splits fairness into two distinct foundations, equality and proportionality, yielding six in total. Previous research has shown that the relative importance of these foundations can vary across cultures.

The authors of the article conducted three studies to examine the nomological network of morality (i.e., the relationships between different moral foundations) in 25 populations. They found that the nomological network of morality varied significantly across cultures. For example, in some cultures, the foundation of care was more strongly related to the foundation of fairness, while in other cultures, the foundation of loyalty was more strongly related to the foundation of authority.

The authors argue that these findings suggest that MFT needs to be revised to take into account cultural variation. They propose that the nomological network of morality is shaped by a combination of universal moral principles and local cultural norms. This means that there is no single "correct" way to think about morality, and that what is considered moral in one culture may not be considered moral in another.

The article's findings have important implications for our understanding of morality and for cross-cultural research. They suggest that we need to be careful about making assumptions about the moral beliefs of people from other cultures. We also need to be aware of the ways in which culture can influence our own moral judgments.

Thursday, September 21, 2023

The Myth of the Secret Genius

Brian Klaas
The Garden of Forking Paths
Originally posted 30 Nov 22

Here are two excerpts: 

A recent research study, involving a collaboration between physicists who model complex systems and an economist, however, has revealed why billionaires are so often mediocre people masquerading as geniuses. Using computer modelling, they developed a fake society in which there is a realistic distribution of talent among competing agents in the simulation. They then applied some pretty simple rules for their model: talent helps, but luck also plays a role.

Then, they tried to see what would happen if they ran and re-ran the simulation over and over.

What did they find? The most talented people in society almost never became extremely rich. As they put it, “the most successful individuals are not the most talented ones and, on the other hand, the most talented individuals are not the most successful ones.”

Why? The answer is simple. If you’ve got a society of, say, 8 billion people, there are literally billions of humans who are in the middle distribution of talent, the largest area of the Bell curve. That means that in a world that is partly defined by random chance, or luck, the odds that someone from the middle levels of talent will end up as the richest person in the society are extremely high.

Look at this first plot, in which the researchers show capital/success (being rich) on the vertical/Y-axis and talent on the horizontal/X-axis. What’s clear is that society’s richest person is only marginally more talented than average, and there are many extremely talented people who are not rich.

Then, they tried to figure out why this was happening. In their simulated world, lucky and unlucky events would affect agents every so often, in a largely random pattern. When they measured the frequency of luck or misfortune for any individual in the simulation, and then plotted it against becoming rich or poor, they found a strong relationship.


The authors conclude by stating: “Our results highlight the risks of the paradigm that we call ‘naive meritocracy,’ which fails to give honors and rewards to the most competent people, because it underestimates the role of randomness among the determinants of success.”
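The model described above is easy to reproduce in miniature. Here is a minimal Python sketch, not the researchers' actual code: talent is normally distributed, every agent starts with equal capital, lucky events pay off only with probability equal to talent, and unlucky events halve capital regardless. The parameter values are illustrative assumptions.

```python
import random

def simulate(n_agents=1000, steps=80, event_prob=0.5, seed=0):
    """One run of a toy talent-vs-luck model (a sketch, not the
    authors' exact simulation)."""
    rng = random.Random(seed)
    # Talent is bell-curve distributed and clipped to [0, 1].
    talent = [min(max(rng.gauss(0.6, 0.1), 0.0), 1.0) for _ in range(n_agents)]
    capital = [10.0] * n_agents  # everyone starts equal
    for _ in range(steps):
        for i in range(n_agents):
            if rng.random() < event_prob:      # a random event hits this agent
                if rng.random() < 0.5:         # lucky event...
                    if rng.random() < talent[i]:  # ...exploited with prob = talent
                        capital[i] *= 2
                else:                          # unlucky event: capital halves
                    capital[i] /= 2
    return talent, capital

talent, capital = simulate()
richest = max(range(len(capital)), key=capital.__getitem__)
most_talented = max(range(len(talent)), key=talent.__getitem__)
```

Across repeated runs, the agent with the highest final capital typically does not coincide with the highest-talent agent, which is the pattern the excerpt describes: middling talent plus good luck beats top talent plus ordinary luck.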


Here is my summary:

The myth of the secret genius: The belief that some people are just born with natural talent and that there is nothing we can do to achieve the same level of success.

The importance of hard work: The vast majority of successful people are not geniuses. They are simply people who have worked hard and persevered in the face of setbacks.

The power of luck: Luck plays a role in everyone's success. Some people are luckier than others, yet most people fail to factor luck and other external variables into their assessments of success. This bias is another form of the Fundamental Attribution Error.

The importance of networks: Our networks play a big role in our success. We need to be proactive in building relationships with people who can help us achieve our goals.

Friday, September 8, 2023

He was a top church official who criticized Trump. He says Christianity is in crisis

S. Detrow, G. J. Sanchez, & S. Handel
NPR
Originally posted 8 Aug 23

Here is an excerpt:

What's the big deal? 

According to Moore, Christianity is in crisis in the United States today.
  • Moore is now the editor-in-chief of the Christianity Today magazine and has written a new book, Losing Our Religion: An Altar Call For Evangelical America, which is his attempt at finding a path forward for the religion he loves.
  • Moore believes part of the problem is that "almost every part of American life is tribalized and factionalized," and that has extended to the church.
  • "I think if we're going to get past the blood and soil sorts of nationalism or all of the other kinds of totalizing cultural identities, it's going to require rethinking what the church is," he told NPR.
  • During his time in office, Trump embraced a Christian nationalist stance — the idea that the U.S. is a Christian country and should enforce those beliefs. In the run-up to the 2024 presidential election, Republican candidates are again vying for the influential evangelical Christian vote, demonstrating its continued influence in politics.
  • In Aug. 2022, church leaders confirmed the Department of Justice was investigating Southern Baptists following a sexual abuse crisis. In a statement, SBC leaders said: "Current leaders across the SBC have demonstrated a firm conviction to address those issues of the past and are implementing measures to ensure they are never repeated in the future."
  • In 2017, the church voted to formally "denounce and repudiate" white nationalism at its annual meeting.

What is he saying? 

Moore spoke to All Things Considered's Scott Detrow about what he thinks the path forward is for evangelicalism in America.

On why he thinks Christianity is in crisis:
It was the result of having multiple pastors tell me, essentially, the same story about quoting the Sermon on the Mount, parenthetically, in their preaching — "turn the other cheek" — [and] to have someone come up after to say, "Where did you get those liberal talking points?" And what was alarming to me is that in most of these scenarios, when the pastor would say, "I'm literally quoting Jesus Christ," the response would not be, "I apologize." The response would be, "Yes, but that doesn't work anymore. That's weak." And when we get to the point where the teachings of Jesus himself are seen as subversive to us, then we're in a crisis.

The information is here. 

Thursday, September 7, 2023

AI Should Be Terrified of Humans

Brian Kateman
Originally posted 24 July 23

Here are two excerpts:

Humans have a pretty awful track record for how we treat others, including other humans. All manner of exploitation, slavery, and violence litters human history. And today, billions upon billions of animals are tortured by us in all sorts of obscene ways, while we ignore the plight of others. There’s no quick answer to ending all this suffering. Let’s not wait until we’re in a similar situation with AI, where their exploitation is so entrenched in our society that we don’t know how to undo it. If we take for granted starting right now that maybe, just possibly, some forms of AI are or will be capable of suffering, we can work with the intention to build a world where they don’t have to.


Today, many scientists and philosophers are looking at the rise of artificial intelligence from the other end—as a potential risk to humans or even humanity as a whole. Some are raising serious concerns over the encoding of social biases like racism and sexism into computer programs, wittingly or otherwise, which can end up having devastating effects on real human beings caught up in systems like healthcare or law enforcement. Others are thinking earnestly about the risks of a digital-being-uprising and what we need to do to make sure we’re not designing technology that will view humans as an adversary and potentially act against us in one way or another. But more and more thinkers are rightly speaking out about the possibility that future AI should be afraid of us.

“We rationalize unmitigated cruelty toward animals—caging, commodifying, mutilating, and killing them to suit our whims—on the basis of our purportedly superior intellect,” Marina Bolotnikova writes in a recent piece for Vox. “If sentience in AI could ever emerge…I’m doubtful we’d be willing to recognize it, for the same reason that we’ve denied its existence in animals.” Working in animal protection, I’m sadly aware of the various ways humans subjugate and exploit other species. Indeed, it’s not only our impressive reasoning skills, our use of complex language, or our ability to solve difficult problems and introspect that makes us human; it’s also our unparalleled ability to increase non-human suffering. Right now there’s no reason to believe that we aren’t on a path to doing the same thing to AI. Consider that despite our moral progress as a species, we torture more non-humans today than ever before. We do this not because we are sadists, but because even when we know individual animals feel pain, we derive too much profit and pleasure from their exploitation to stop.

Wednesday, September 6, 2023

Could a Large Language Model Be Conscious?

David Chalmers
Boston Review
Originally posted 9 Aug 23

Here are two excerpts:

Consciousness also matters morally. Conscious systems have moral status. If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them. More generally, conscious AI will be a step on the path to human level artificial general intelligence. It will be a major step that we shouldn’t take unreflectively or unknowingly.

This gives rise to a second challenge: Should we create conscious AI? This is a major ethical challenge for the community. The question is important and the answer is far from obvious.

We already face many pressing ethical challenges about large language models. There are issues about fairness, about safety, about truthfulness, about justice, about accountability. If conscious AI is coming somewhere down the line, then that will raise a new group of difficult ethical challenges, with the potential for new forms of injustice added on top of the old ones. One issue is that conscious AI could well lead to new harms toward humans. Another is that it could lead to new harms toward AI systems themselves.

I’m not an ethicist, and I won’t go deeply into the ethical questions here, but I don’t take them lightly. I don’t want the roadmap to conscious AI that I’m laying out here to be seen as a path that we have to go down. The challenges I’m laying out in what follows could equally be seen as a set of red flags. Each challenge we overcome gets us closer to conscious AI, for better or for worse. We need to be aware of what we’re doing and think hard about whether we should do it.


Where does the overall case for or against LLM consciousness stand?

Where current LLMs such as the GPT systems are concerned: I think none of the reasons for denying consciousness in these systems is conclusive, but collectively they add up. We can assign some extremely rough numbers for illustrative purposes. On mainstream assumptions, it wouldn’t be unreasonable to hold that there’s at least a one-in-three chance—that is, to have a subjective probability or credence of at least one-third—that biology is required for consciousness. The same goes for the requirements of sensory grounding, self models, recurrent processing, global workspace, and unified agency. If these six factors were independent, it would follow that there’s less than a one-in-ten chance that a system lacking all six, like a current paradigmatic LLM, would be conscious. Of course the factors are not independent, which drives the figure somewhat higher. On the other hand, the figure may be driven lower by other potential requirements X that we have not considered.

Taking all that into account might leave us with confidence somewhere under 10 percent in current LLM consciousness. You shouldn’t take the numbers too seriously (that would be specious precision), but the general moral is that given mainstream assumptions about consciousness, it’s reasonable to have a low credence that current paradigmatic LLMs such as the GPT systems are conscious.
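Chalmers's back-of-the-envelope figure is easy to check. The snippet below merely restates his illustrative arithmetic, under the independence assumption he himself flags as unrealistic:

```python
# Six candidate requirements for consciousness (biology, sensory
# grounding, self models, recurrent processing, global workspace,
# unified agency), each assigned a credence of at least 1/3 of
# being genuinely required.
p_factor_not_required = 2 / 3

# If the six factors were independent, the chance that a system
# lacking all of them could still be conscious is at most:
p_conscious = p_factor_not_required ** 6
print(f"{p_conscious:.3f}")  # 0.088 -- "less than one in ten"
```

As Chalmers notes, correlations among the factors push this figure up, while additional unconsidered requirements push it down, so the number is only a rough anchor for a sub-10-percent credence.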

Here are some of the key points from the article:
  1. There is no consensus on what consciousness is, so it is difficult to say definitively whether or not LLMs are conscious.
  2. Some people believe that consciousness requires carbon-based biology, but Chalmers argues that this is a form of biological chauvinism.  I agree with this completely. We can have synthetic forms of consciousness.
  3. Other people believe that LLMs are not conscious because they lack sensory processing or bodily embodiment. Chalmers argues that these objections are not decisive, but they do raise important questions about the nature of consciousness.
  4. Chalmers concludes by suggesting that we should take the possibility of LLM consciousness seriously, but that we should also be cautious about making definitive claims about it.

Friday, September 1, 2023

Building Superintelligence Is Riskier Than Russian Roulette

Tam Hunt & Roman Yampolskiy
Originally posted 2 August 23

Here is an excerpt:

The precautionary principle is a long-standing approach for new technologies and methods that urges positive proof of safety before real-world deployment. Companies like OpenAI have so far released their tools to the public with no requirements at all to establish their safety. The burden of proof should be on companies to show that their AI products are safe—not on public advocates to show that those same products are not safe.

Recursively self-improving AI, the kind many companies are already pursuing, is the most dangerous kind, because it may lead to an intelligence explosion some have called “the singularity,” a point in time beyond which it becomes impossible to predict what might happen because AI becomes god-like in its abilities. That moment could happen in the next year or two, or it could be a decade or more away.

Humans won’t be able to anticipate what a far-smarter entity plans to do or how it will carry out its plans. Such superintelligent machines, in theory, will be able to harness all of the energy available on our planet, then the solar system, then eventually the entire galaxy, and we have no way of knowing what those activities will mean for human well-being or survival.

Can we trust that a god-like AI will have our best interests in mind? Similarly, can we trust that human actors using the coming generations of AI will have the best interests of humanity in mind? With the stakes so incredibly high in developing superintelligent AI, we must have a good answer to these questions—before we go over the precipice.

Because of these existential concerns, more scientists and engineers are now working toward addressing them. For example, the theoretical computer scientist Scott Aaronson recently said that he’s working with OpenAI to develop ways of implementing a kind of watermark on the text that the company’s large language models, like GPT-4, produce, so that people can verify the text’s source. It’s still far too little, and perhaps too late, but it is encouraging to us that a growing number of highly intelligent humans are turning their attention to these issues.

Philosopher Toby Ord argues, in his book The Precipice: Existential Risk and the Future of Humanity, that in our ethical thinking and, in particular, when thinking about existential risks like AI, we must consider not just the welfare of today’s humans but the entirety of our likely future, which could extend for billions or even trillions of years if we play our cards right. So the risks stemming from our AI creations need to be considered not only over the next decade or two, but for every decade stretching forward over vast amounts of time. That’s a much higher bar than ensuring AI safety “only” for a decade or two.

Skeptics of these arguments often suggest that we can simply program AI to be benevolent, and if or when it becomes superintelligent, it will still have to follow its programming. This ignores the ability of superintelligent AI to either reprogram itself or to persuade humans to reprogram it. In the same way that humans have figured out ways to transcend our own “evolutionary programming”—caring about all of humanity rather than just our family or tribe, for example—AI will very likely be able to find countless ways to transcend any limitations or guardrails we try to build into it early on.

Here is my summary:

The article argues that building superintelligence is a risky endeavor, even more so than playing Russian roulette. Further, there is no way to guarantee that we will be able to control a superintelligent AI, and that even if we could, it is possible that the AI would not share our values. This could lead to the AI harming or even destroying humanity.

The authors propose that we should pause our current efforts to develop superintelligence and instead focus on understanding the risks involved. They argue that we need to develop a better understanding of how to align AI with our values, and that we need to develop safety mechanisms that will prevent AI from harming humanity.  (See Shelley's Frankenstein as a literary example.)

Friday, August 11, 2023

How and why people want to be more moral

Sun, J., Wilt, J. A., Meindl, et al. (2023).
Journal of Personality.



What types of moral improvements do people wish to make? Do they hope to become more good, or less bad? Do they wish to be more caring? More honest? More loyal? And why exactly do they want to become more moral? Presumably, most people want to improve their morality because this would benefit others, but is this in fact their primary motivation? Here, we begin to investigate these questions.


Across two large, preregistered studies (N = 1818), participants provided open-ended descriptions of one change they could make in order to become more moral; they then reported their beliefs about and motives for this change.


In both studies, people most frequently expressed desires to improve their compassion and more often framed their moral improvement goals in terms of amplifying good behaviors than curbing bad ones. The strongest predictor of moral motivation was the extent to which people believed that making the change would have positive consequences for their own well-being.


Together, these studies provide rich descriptive insights into how ordinary people want to be more moral, and show that they are particularly motivated to do so for their own sake.

My summary:
  • People most frequently expressed desires to improve their compassion. This suggests that people are motivated to become more moral in order to be more caring and helpful to others.
  • People more often framed their moral improvement goals in terms of amplifying good behaviors than curbing bad ones. This suggests that people are motivated to become more moral by doing more good things, rather than by simply avoiding doing bad things.
  • The strongest predictor of moral motivation was the extent to which people believed that making the change would have positive consequences for their own well-being. This suggests that people are motivated to become more moral for their own sake, as well as for the sake of others.

Friday, July 21, 2023

Belief in Five Spiritual Entities Edges Down to New Lows

Megan Brenan
Originally posted 20 July 23

The percentages of Americans who believe in each of five religious entities -- God, angels, heaven, hell and the devil -- have edged downward by three to five percentage points since 2016. Still, majorities believe in each, ranging from a high of 74% believing in God to lows of 59% for hell and 58% for the devil. About two-thirds each believe in angels (69%) and heaven (67%).

Gallup has used this framework to measure belief in these spiritual entities five times since 2001, and the May 1-24, 2023, poll finds that each is at its lowest point. Compared with 2001, belief in God and heaven is down the most (16 points each), while belief in hell has fallen 12 points, and the devil and angels are down 10 points each.

This question asks respondents whether they believe in each concept or if they are unsure, and from 13% to 15% currently say they are not sure. At the same time, nearly three in 10 U.S. adults do not believe in the devil or hell, while almost two in 10 do not believe in angels and heaven, and 12% say they do not believe in God.

As the percentage of believers has dropped over the past two decades, the corresponding increases have occurred mostly in nonbelief, with much smaller increases in uncertainty. This is true for all but belief in God, which has seen nearly equal increases in uncertainty and nonbelief.

In the current poll, about half of Americans, 51%, believe in all five spiritual entities, while 11% do not believe in any of them. Another 7% are not sure about all of them, while the rest (31%) believe in some and not others.

Gallup periodically measures Americans’ belief in God with different question wordings, producing slightly different results. While the majority of U.S. adults say they believe in God regardless of the question wording, when not offered the option to say they are unsure, significantly more (81% in a survey conducted last year) said they believe in God.

My take: Despite the decline in belief, majorities of Americans still believe in each of the five spiritual entities. This suggests that religion remains an important part of American culture, even as the country becomes more secularized.

Saturday, July 8, 2023

Microsoft Scraps Entire Ethical AI Team Amid AI Boom

Lauren Leffer
Updated on March 14, 2023
Still relevant

Microsoft is currently in the process of shoehorning text-generating artificial intelligence into every single product that it can. And starting this month, the company will be continuing on its AI rampage without a team dedicated to internally ensuring those AI features meet Microsoft’s ethical standards, according to a Monday night report from Platformer.

Microsoft has scrapped its whole Ethics and Society team within the company’s AI sector, as part of ongoing layoffs set to impact 10,000 total employees, per Platformer. The company maintains its Office of Responsible AI, which creates the broad, Microsoft-wide principles to govern corporate AI decision making. But the ethics and society taskforce, which bridged the gap between policy and products, is reportedly no more.

Gizmodo reached out to Microsoft to confirm the news. In response, a company spokesperson sent the following statement:
Microsoft remains committed to developing and designing AI products and experiences safely and responsibly. As the technology has evolved and strengthened, so has our investment, which at times has meant adjusting team structures to be more effective. For example, over the past six years we have increased the number of people within our product teams who are dedicated to ensuring we adhere to our AI principles. We have also increased the scale and scope of our Office of Responsible AI, which provides cross-company support for things like reviewing sensitive use cases and advocating for policies that protect customers.

To Platformer, the company reportedly previously shared this slightly different version of the same statement:

Microsoft is committed to developing AI products and experiences safely and responsibly...Over the past six years we have increased the number of people across our product teams within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice...We appreciate the trailblazing work the ethics and society team did to help us on our ongoing responsible AI journey.

Note that this older version inadvertently confirms that the ethics and society team is no more. It also located the staffing increases specifically within the Office of Responsible AI, rather than among people generally “dedicated to ensuring we adhere to our AI principles.”

Yet, despite Microsoft’s reassurances, former employees told Platformer that the Ethics and Society team played a key role translating big ideas from the responsibility office into actionable changes at the product development level.

The info is here.

Friday, June 30, 2023

The psychology of zero-sum beliefs

Davidai, S., Tepper, S.J. 
Nat Rev Psychol (2023). 


People often hold zero-sum beliefs (subjective beliefs that, independent of the actual distribution of resources, one party’s gains are inevitably accrued at other parties’ expense) about interpersonal, intergroup and international relations. In this Review, we synthesize social, cognitive, evolutionary and organizational psychology research on zero-sum beliefs. In doing so, we examine when, why and how such beliefs emerge and what their consequences are for individuals, groups and society.  Although zero-sum beliefs have been mostly conceptualized as an individual difference and a generalized mindset, their emergence and expression are sensitive to cognitive, motivational and contextual forces. Specifically, we identify three broad psychological channels that elicit zero-sum beliefs: intrapersonal and situational forces that elicit threat, generate real or imagined resource scarcity, and inhibit deliberation. This systematic study of zero-sum beliefs advances our understanding of how these beliefs arise, how they influence people’s behaviour and, we hope, how they can be mitigated.

From the Summary and Future Directions section

We have suggested that zero-sum beliefs are influenced by threat, a sense of resource scarcity and lack of deliberation. Although each of these three channels can separately lead to zero-sum beliefs, simultaneously activating more than one channel might be especially potent. For instance, focusing on losses (versus gains) is both threatening and heightens a sense of resource scarcity. Consequently, focusing on losses might be especially likely to foster zero-sum beliefs. Similarly, insufficient deliberation on the long-term and dynamic effects of international trade might foster a view of domestic currency as scarce, prompting the belief that trade is zero-sum. Thus, any factor that simultaneously affects the threat that people experience, their perceptions of resource scarcity, and their level of deliberation is more likely to result in zero-sum beliefs, and attenuating zero-sum beliefs requires an exploration of all the different factors that lead to these experiences in the first place. For instance, increasing deliberation reduces zero-sum beliefs about negotiations by increasing people’s accountability, perspective taking or consideration of mutually beneficial issues. Future research could manipulate deliberation in other contexts to examine its causal effect on zero-sum beliefs. Indeed, because people express more moderate beliefs after deliberating policy details, prompting participants to deliberate about social issues (for example, asking them to explain the process by which one group’s outcomes influence another group’s outcomes) might reduce zero-sum beliefs. More generally, research could examine long-term and scalable solutions for reducing zero-sum beliefs, focusing on interventions that simultaneously reduce threat, mitigate views of resource scarcity and increase deliberation.  
For instance, as formal training in economics is associated with lower zero-sum beliefs, researchers could examine whether teaching people basic economic principles reduces zero-sum beliefs across various domains. Similarly, because higher socioeconomic status is negatively associated with zero-sum beliefs, creating a sense of abundance might counter the belief that life is zero-sum.

Thursday, June 29, 2023

Fairytales have always reflected the morals of the age. It’s not a sin to rewrite them

Martha Gill
The Guardian
Originally posted 4 June 23

Here are two excerpts:

General outrage greeted “woke” updates to Roald Dahl books this year, and still periodically erupts over Disney remakes, most recently a forthcoming film with a Latina actress as Snow White, and a new Peter Pan & Wendy with “lost girls”. The argument is that too much fashionable refurbishment tends to ruin a magical kingdom, and that cult classics could do with the sort of Grade I listing applied to heritage buildings. If you want to tell new stories, fine – but why not start from scratch?

But this point of view misses something, which is that updating classics is itself an ancient part of literary culture; in fact, it is a tradition, part of our heritage too. While the larger portion of the literary canon is carefully preserved, a slice of it has always been more flexible, to be retold and reshaped as times change.

Fairytales fit within this latter custom: they have been updated, periodically, for many hundreds of years. Cult figures such as Dracula, Frankenstein and Sherlock Holmes fit there too, as do superheroes: each generation, you might say, gets the heroes it deserves. And so does Bond. Modernity is both a villain and a hero within the Bond franchise: the dramatic tension between James – a young cosmopolitan “dinosaur” – and the passing of time has always been part of the fun.

This tradition has a richness to it: it is a historical record of sorts. Look at the progress of the fairy story through the ages and you get a twisty tale of dubious progress, a moral journey through the woods. You could say fairytales have always been politically correct – that is, tweaked to reflect whatever morals a given cohort of parents most wanted to teach their children.


The idea that we are pasting over history – censoring important artefacts – is wrongheaded too. It is not as if old films or books have been burned, wiped from the internet or removed from libraries. With today’s propensity for writing things down, common since the 1500s, there is no reason to fear losing the “original” stories.

As for the suggestion that minority groups should make their own stories instead – this is a sly form of exclusion. Ancient universities and gentlemen’s clubs once made similar arguments; why couldn’t exiled individuals simply set up their own versions? It is not so easy. Old stories weave themselves deep into the tapestry of a nation; newer ones will necessarily be confined to the margins.

My take: Updating classic stories can be beneficial and even necessary to promote inclusion, diversity, equity, and fairness. By not updating these stories, we risk perpetuating harmful stereotypes and narratives that reinforce the dominant culture. When we update classic stories, we can create new possibilities for representation and understanding that can help to build a more just and equitable world.  Dominant cultures need to cede power to promote more unity in a multicultural nation.

Saturday, June 24, 2023

The Darwinian Argument for Worrying About AI

Dan Hendrycks
Originally posted 31 May 23

Here is an excerpt:

In the biological realm, evolution is a slow process. For humans, it takes nine months to create the next generation and around 20 years of schooling and parenting to produce fully functional adults. But scientists have observed meaningful evolutionary changes in species with rapid reproduction rates, like fruit flies, in fewer than 10 generations. Unconstrained by biology, AIs could adapt—and therefore evolve—even faster than fruit flies do.

There are three reasons this should worry us. The first is that selection effects make AIs difficult to control. Whereas AI researchers once spoke of “designing” AIs, they now speak of “steering” them. And even our ability to steer is slipping out of our grasp as we let AIs teach themselves and increasingly act in ways that even their creators do not fully understand. In advanced artificial neural networks, we understand the inputs that go into the system, but the output emerges from a “black box” with a decision-making process largely indecipherable to humans.

Second, evolution tends to produce selfish behavior. Amoral competition among AIs may select for undesirable traits. AIs that successfully gain influence and provide economic value will predominate, replacing AIs that act in a more narrow and constrained manner, even if this comes at the cost of lowering guardrails and safety measures. As an example, most businesses follow laws, but in situations where stealing trade secrets or deceiving regulators is highly lucrative and difficult to detect, a business that engages in such selfish behavior will most likely outperform its more principled competitors.

Selfishness doesn’t require malice or even sentience. When an AI automates a task and leaves a human jobless, this is selfish behavior without any intent. If competitive pressures continue to drive AI development, we shouldn’t be surprised if AIs act selfishly too.

The third reason is that evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation. Skeptics of AI risks often ask, “Couldn’t we just turn the AI off?” There are a variety of practical challenges here. The AI could be under the control of a different nation or a bad actor. Or AIs could be integrated into vital infrastructure, like power grids or the internet. When embedded into these critical systems, the cost of disabling them may prove too high for us to accept since we would become dependent on them. AIs could become embedded in our world in ways that we can’t easily reverse. But natural selection poses a more fundamental barrier: we will select against AIs that are easy to turn off, and we will come to depend on AIs that we are less likely to turn off.

These strong economic and strategic pressures to adopt the systems that are most effective mean that humans are incentivized to cede more and more power to AI systems that cannot be reliably controlled, putting us on a pathway toward being supplanted as the earth’s dominant species. There are no easy, surefire solutions to our predicament.

Sunday, June 18, 2023

Gender-Affirming Care for Trans Youth Is Neither New nor Experimental: A Timeline and Compilation of Studies

Julia Serano
Originally posted 16 May 23

Trans and gender-diverse people are a pancultural and transhistorical phenomenon. It is widely understood that we, like LGBTQ+ people more generally, arise due to natural variation rather than the result of pathology, modernity, or the latest conspiracy theory.

Gender-affirming healthcare has a long history. The first trans-related surgeries were carried out in the 1910s–1930s (Meyerowitz, 2002, pp. 16–21). While some doctors were supportive early on, most were wary. Throughout the mid-twentieth century, these skeptical doctors subjected trans people to all sorts of alternate treatments — from perpetual psychoanalysis, to aversion and electroshock therapies, to administering assigned-sex-consistent hormones (e.g., testosterone for trans female/feminine people), and so on — but none of them worked. The only treatment that reliably allowed trans people to live happy and healthy lives was allowing them to transition. While doctors were initially worried that many would eventually come to regret that decision, study after study has shown that gender-affirming care has a far lower regret rate (typically around 1 or 2 percent) than virtually any other medical procedure. Given all this, plus the fact that there is no test for being trans (medical, psychological, or otherwise), around the turn of the century, doctors began moving away from strict gatekeeping and toward an informed consent model for trans adults to attain gender-affirming care.

Trans children have always existed — indeed most trans adults can tell you about their trans childhoods. During the twentieth century, while some trans kids did socially transition (Gill-Peterson, 2018), most had their gender identities disaffirmed, either by parents who disbelieved them or by doctors who subjected them to “gender reparative” or “conversion” therapies. The rationale behind the latter was a belief at that time that gender identity was flexible and subject to change during early childhood, but we now know that this is not true (see e.g., Diamond & Sigmundson, 1997; Reiner & Gearhart, 2004). Over the years, it became clear that these conversion efforts were not only ineffective, but they caused real harm — this is why most health professional organizations oppose them today.

Given the harm caused by gender-disaffirming approaches, around the turn of the century, doctors and gender clinics began moving toward what has come to be known as the gender affirmative model — here’s how I briefly described this approach in my 2016 essay Detransition, Desistance, and Disinformation: A Guide for Understanding Transgender Children Debates:

Rather than being shamed by their families and coerced into gender conformity, these children are given the space to explore their genders. If they consistently, persistently, and insistently identify as a gender other than the one they were assigned at birth, then their identity is respected, and they are given the opportunity to live as a member of that gender. If they remain happy in their identified gender, then they may later be placed on puberty blockers to stave off unwanted bodily changes until they are old enough (often at age sixteen) to make an informed decision about whether or not to hormonally transition. If they change their minds at any point along the way, then they are free to make the appropriate life changes and/or seek out other identities.

Saturday, June 17, 2023

Debt Collectors Want To Use AI Chatbots To Hustle People For Money

Corin Faife
Originally posted 18 May 23

Here are two excerpts:

The prospect of automated AI systems making phone calls to distressed people adds another dystopian element to an industry that has long targeted poor and marginalized people. Debt collection and enforcement is far more likely to occur in Black communities than white ones, and research has shown that predatory debt and interest rates exacerbate poverty by keeping people trapped in a never-ending cycle. 

In recent years, borrowers in the US have been piling on debt. In the fourth quarter of 2022, household debt rose to a record $16.9 trillion according to the New York Federal Reserve, accompanied by an increase in delinquency rates on larger debt obligations like mortgages and auto loans. Outstanding credit card balances are at record levels, too. The pandemic generated a huge boom in online spending, and besides traditional credit cards, younger spenders were also hooked by fintech startups pushing new finance products, like the extremely popular “buy now, pay later” model of Klarna, Sezzle, Quadpay and the like.

So debt is mounting, and with interest rates up, more and more people are missing payments. That means more outstanding debts being passed on to collection, giving the industry a chance to sprinkle some AI onto the age-old process of prodding, coaxing, and pressuring people to pay up.

For an insight into how this works, we need look no further than the sales copy of companies that make debt collection software. Here, products are described in a mix of generic corp-speak and dystopian portent: SmartAction, another conversational AI product like Skit, has a debt collection offering that claims to help with “alleviating the negative feelings customers might experience with a human during an uncomfortable process”—because they’ll surely be more comfortable trying to negotiate payments with a robot instead. 


“Striking the right balance between assertiveness and empathy is a significant challenge in debt collection,” the company writes in the blog post, which claims GPT-4 has the ability to be “firm and compassionate” with customers.

When algorithmic, dynamically optimized systems are applied to sensitive areas like credit and finance, there’s a real possibility that bias is being unknowingly introduced. A McKinsey report into digital collections strategies plainly suggests that AI can be used to identify and segment customers by risk profile—i.e. credit score plus whatever other data points the lender can factor in—and fine-tune contact techniques accordingly. 

Friday, June 16, 2023

ChatGPT Is a Plagiarism Machine

Joseph Keegin
The Chronicle
Originally posted 23 May 23

Here is an excerpt:

A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment. One obviously hopes that, eventually, students become motivated less by fear of punishment than by a sense of responsibility for their own education. But if those in charge of the institutions of learning — the ones who are supposed to set an example and lay out the rules — can’t bring themselves to even talk about a major issue, let alone establish clear and reasonable guidelines for those facing it, how can students be expected to know what to do?

So to any deans, presidents, department chairs, or other administrators who happen to be reading this, here are some humble, nonexhaustive, first-aid-style recommendations. First, talk to your faculty — especially junior faculty, contingent faculty, and graduate-student lecturers and teaching assistants — about what student writing has looked like this past semester. Try to find ways to get honest perspectives from students, too; the ones actually doing the work are surely frustrated at their classmates’ laziness and dishonesty. Any meaningful response is going to demand knowing the scale of the problem, and the paper-graders know best what’s going on. Ask teachers what they’ve seen, what they’ve done to try to mitigate the possibility of AI plagiarism, and how well they think their strategies worked. Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right. It is worth figuring out where everyone stands on this question, and how best to align different perspectives and make allowances for divergent opinions while holding a firm line on the question of plagiarism.

Second, meet with your institution’s existing honor board (or whatever similar office you might have for enforcing the strictures of academic integrity) and devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.

Lastly, and perhaps most importantly, make it very, very clear to your student body — perhaps via a firmly worded statement — that AI-generated work submitted as original effort will be punished to the fullest extent of what your institution allows. Post the statement on your institution’s website and make it highly visible on the home page. Consider using this challenge as an opportunity to reassert the basic purpose of education: to develop the skills, to cultivate the virtues and habits of mind, and to acquire the knowledge necessary for leading a rich and meaningful human life.

Tuesday, June 13, 2023

Using the Veil of Ignorance to align AI systems with principles of justice

Weidinger, L., McKee, K. R., et al. (2023).
PNAS, 120(18), e2213709120.


The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an Artificial Intelligence (AI) assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood of participants continuing to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.


The growing integration of Artificial Intelligence (AI) into society raises a critical question: How can principles be fairly selected to govern these systems? Across five studies, with a total of 2,508 participants, we use the Veil of Ignorance to select principles to align AI systems. Compared to participants who know their position, participants behind the veil more frequently choose, and endorse upon reflection, principles for AI that prioritize the worst-off. This pattern is driven by increased consideration of fairness, rather than by political orientation or attitudes to risk. Our findings suggest that the Veil of Ignorance may be a suitable process for selecting principles to govern real-world applications of AI.

From the Discussion section

What do these findings tell us about the selection of principles for AI in the real world? First, the effects we observe suggest that—even though the VoI was initially proposed as a mechanism to identify principles of justice to govern society—it can be meaningfully applied to the selection of governance principles for AI. Previous studies applied the VoI to the state, such that our results provide an extension of prior findings to the domain of AI. Second, the VoI mechanism demonstrates many of the qualities that we want from a real-world alignment procedure: It is an impartial process that recruits fairness-based reasoning rather than self-serving preferences. It also leads to choices that people continue to endorse across different contexts even where they face a self-interested motivation to change their mind. This is both functionally valuable in that aligning AI to stable preferences requires less frequent updating as preferences change, and morally significant, insofar as we judge stable reflectively endorsed preferences to be more authoritative than their nonreflectively endorsed counterparts. Third, neither principle choice nor subsequent endorsement appear to be particularly affected by political affiliation—indicating that the VoI may be a mechanism to reach agreement even between people with different political beliefs. Lastly, these findings provide some guidance about what the content of principles for AI, selected from behind a VoI, may look like: When situated behind the VoI, the majority of participants instructed the AI assistant to help those who were least advantaged.

Sunday, June 4, 2023

We need to examine the beliefs of today’s tech luminaries

Anjana Ahuja
Financial Times
Originally posted 10 May 23

People who are very rich or very clever, or both, sometimes believe weird things. Some of these beliefs are captured in the acronym Tescreal. The letters represent overlapping futuristic philosophies — bookended by transhumanism and longtermism — favoured by many of AI’s wealthiest and most prominent supporters.

The label, coined by a former Google ethicist and a philosopher, is beginning to circulate online and usefully explains why some tech figures would like to see the public gaze trained on fuzzy future problems such as existential risk, rather than on current liabilities such as algorithmic bias. A fraternity that is ultimately committed to nurturing AI for a posthuman future may care little for the social injustices committed by their errant infant today.

As well as transhumanism, which advocates for the technological and biological enhancement of humans, Tescreal encompasses extropianism, a belief that science and technology will bring about indefinite lifespan; singularitarianism, the idea that an artificial superintelligence will eventually surpass human intelligence; cosmism, a manifesto for curing death and spreading outwards into the cosmos; rationalism, the conviction that reason should be the supreme guiding principle for humanity; effective altruism, a social movement that calculates how to maximally benefit others; and longtermism, a radical form of utilitarianism which argues that we have moral responsibilities towards the people who are yet to exist, even at the expense of those who currently do.

(cut, and the ending)

Gebru, along with others, has described such talk as fear-mongering and marketing hype. Many will be tempted to dismiss her views — she was sacked from Google after raising concerns over energy use and social harms linked to large language models — as sour grapes, or an ideological rant. But that glosses over the motivations of those running the AI show, a dazzling corporate spectacle with a plot line that very few are able to confidently follow, let alone regulate.

Repeated talk of a possible techno-apocalypse not only sets up these tech glitterati as guardians of humanity, it also implies an inevitability in the path we are taking. And it distracts from the real harms racking up today, identified by academics such as Ruha Benjamin and Safiya Noble. Decision-making algorithms using biased data are deprioritising black patients for certain medical procedures, while generative AI is stealing human labour, propagating misinformation and putting jobs at risk.

Perhaps those are the plot twists we were not meant to notice.

Wednesday, May 24, 2023

Fighting for our cognitive liberty

Liz Mineo
The Harvard Gazette
Originally published 26 April 23

Imagine going to work and having your employer monitor your brainwaves to see whether you’re mentally tired or fully engaged in filling out that spreadsheet on April sales.

Nita Farahany, professor of law and philosophy at Duke Law School and author of “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” says it’s already happening, and we all should be worried about it.

Farahany highlighted the promise and risks of neurotechnology in a conversation with Francis X. Shen, an associate professor in the Harvard Medical School Center for Bioethics and the MGH Department of Psychiatry, and an affiliated professor at Harvard Law School. The Monday webinar was co-sponsored by the Harvard Medical School Center for Bioethics, the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and the Dana Foundation.

Farahany said the practice of tracking workers’ brains, once exclusively the stuff of science fiction, follows the natural evolution of personal technology, which has normalized the use of wearable devices that chronicle heartbeats, footsteps, and body temperatures. Sensors capable of detecting and decoding brain activity already have been embedded into everyday devices such as earbuds, headphones, watches, and wearable tattoos.

“Commodification of brain data has already begun,” she said. “Brain sensors are already being sold worldwide. It isn’t everybody who’s using them yet. When it becomes an everyday part of our everyday lives, that’s the moment at which you hope that the safeguards are already in place. That’s why I think now is the right moment to do so.”

Safeguards to protect people’s freedom of thought, privacy, and self-determination should be implemented now, said Farahany. Five thousand companies around the world are using SmartCap technologies to track workers’ fatigue levels, and many other companies are using other technologies to track focus, engagement and boredom in the workplace.

If protections are put in place, said Farahany, the story with neurotechnology could be different than the one Shoshana Zuboff warns of in her 2019 book, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.” In it Zuboff, Charles Edward Wilson Professor Emerita at Harvard Business School, examines the threat of the widescale corporate commodification of personal data in which predictions of our consumer activities are bought, sold, and used to modify behavior.