Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, October 31, 2020

The new trinity of religious moral character: the Cooperator, the Crusader, and the Complicit

S. Abrams, J. Jackson, & K. Gray
Current Opinion in Psychology, 2021


Does religion make people good or bad? We suggest that there are at least three distinct profiles of religious morality: the Cooperator, the Crusader, and the Complicit. Cooperators forego selfishness to benefit others, crusaders harm outgroups to bolster their own religious community, and the complicit use religion to justify selfish behavior and reduce blame. Different aspects of religion motivate each character: religious reverence makes people cooperators, religious tribalism makes people crusaders, and religious absolution makes people complicit. This framework makes sense of previous research by explaining when and how religion can make people more or less moral.


• Different aspects of religion inspire both morality and immorality.

• These distinct influences are summarized through three profiles of moral character.

• The ‘Cooperator’ profile shows how religious reverence encourages people to sacrifice self-interest.

• The ‘Crusader’ profile shows how religious tribalism motivates ingroup loyalty and outgroup hostility.

• The ‘Complicit’ profile shows how religious absolution allows people to justify selfish behavior.

From the Conclusion

Religion and morality are complex, and so is their relationship. This review makes sense of religious and moral complexity through a taxonomy of three moral characters — the Cooperator, the Crusader, and the Complicit — each of which is facilitated by different aspects of religion. Religious reverence encourages people to be cooperators, religious tribalism justifies people to behave like crusaders, and religious absolution allows people to be complicit.

Friday, October 30, 2020

The corporate responsibility facade is finally starting to crumble

Alison Taylor
Yahoo Finance
Originally posted 4 March 20

Here is an excerpt:

Any claim to be a responsible corporation is predicated on addressing these abuses of power. But most companies are instead clinging with remarkable persistence to the façades they’ve built to deflect attention. Compliance officers focus on pleasing regulators, even though there is limited evidence that their recommendations reduce wrongdoing. Corporate sustainability practitioners drown their messages in an alphabet soup of acronyms, initiatives, and alienating jargon about “empowered communities” and “engaged stakeholders,” when both functions are still considered peripheral to corporate strategy.

When reading a corporation’s sustainability report and then comparing it to its risk disclosures—or worse, its media coverage—we might as well be reading about entirely distinct companies. Investors focused on sustainability speak of “materiality” principles, meant to sharpen our focus on the most relevant environmental, social, and governance (ESG) issues for each industry. But when an issue is “material” enough to threaten core operating models, companies routinely ignore, evade, and equivocate.

Coca-Cola’s most recent annual sustainability report acknowledges its most pressing issue is “obesity concerns and category perceptions.” Accordingly, it highlights its lower-sugar product lines and references responsible marketing. But it continues its vigorous lobbying against soda taxes, and of course continues to make products with known links to obesity and other health problems. Facebook’s sustainability disclosures focus on efforts to fight climate change and improve labor rights in its supply chain, but make no reference to the mental-health impacts of social media or to its role in peddling disinformation and undermining democracy. Johnson & Johnson flags “product quality and safety” as its highest priority issue without mentioning that it is a defendant in criminal litigation over the distribution of opioids. UBS touts its sustainability targets but not its ongoing financing of fossil-fuel projects.

Thursday, October 29, 2020

Probabilistic Biases Meet the Bayesian Brain.

Chater, N., et al.
Current Directions in Psychological Science


In Bayesian cognitive science, the mind is seen as a spectacular probabilistic-inference machine. But judgment and decision-making (JDM) researchers have spent half a century uncovering how dramatically and systematically people depart from rational norms. In this article, we outline recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities but approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, which offers the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.


Human probabilistic reasoning gets bad press. Decades of brilliant experiments, most notably by Daniel Kahneman and Amos Tversky (e.g., Kahneman, 2011; Kahneman, Slovic, & Tversky, 1982), have shown a plethora of ways in which people get into a terrible muddle when wondering how probable things are. Every psychologist has learned about anchoring, conservatism, the representativeness heuristic, and many other ways that people reveal their probabilistic incompetence. Creating probability theory in the first place was incredibly challenging, exercising great mathematical minds over several centuries (Hacking, 1990). Probabilistic reasoning is hard, and perhaps it should not be surprising that people often do it badly. This view is the starting point for the whole field of judgment and decision-making (JDM) and its cousin, behavioral economics.

Oddly, though, human probabilistic reasoning equally often gets good press. Indeed, many psychologists, neuroscientists, and artificial-intelligence researchers believe that probabilistic reasoning is, in fact, the secret of human intelligence.
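The sampling hypothesis at the heart of this article can be illustrated with a small simulation (a sketch for intuition only, not the authors' model): when a probability is estimated from just a handful of samples drawn from memory or mental simulation, the resulting judgments are unbiased on average yet highly variable and often extreme, the kind of systematic-looking divergence the sampling account uses to explain classic heuristics and biases.

```python
# Sketch: judging an event's probability from a few mental "samples"
# versus many. Few-sample estimates are noisy and frequently extreme
# (e.g., exactly 0), even though they are correct on average.
import random
import statistics

def sampled_estimate(true_p, n_samples, rng):
    """Estimate a probability by counting successes in n_samples draws."""
    hits = sum(rng.random() < true_p for _ in range(n_samples))
    return hits / n_samples

rng = random.Random(42)
true_p = 0.1  # a rare event, e.g., a low-frequency cause of death

few = [sampled_estimate(true_p, 5, rng) for _ in range(10_000)]
many = [sampled_estimate(true_p, 500, rng) for _ in range(10_000)]

print(f"5 samples:   mean={statistics.mean(few):.3f}, sd={statistics.stdev(few):.3f}")
print(f"500 samples: mean={statistics.mean(many):.3f}, sd={statistics.stdev(many):.3f}")
print(f"Share of 5-sample judges calling the event impossible: "
      f"{sum(e == 0 for e in few) / len(few):.2f}")
```

Neither estimator is biased in the mean, but the few-sample judge often reports that the rare event never happens, or at double its true rate; aggregated across many judgments, such small-sample noise can mimic availability-style errors.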

Wednesday, October 28, 2020

Small Victories: Texas social workers will no longer be allowed to discriminate against LGBTQ Texans and people with disabilities

Edgar Walters
Texas Tribune
Originally posted 27 Oct 20

After backlash from lawmakers and advocates, a state board voted Tuesday to undo a rule change that would have allowed social workers to turn away clients who are LGBTQ or have a disability.

The Texas Behavioral Health Executive Council voted unanimously to restore protections for LGBTQ and disabled clients to Texas social workers’ code of conduct just two weeks after removing them.

Gloria Canseco, who was appointed by Gov. Greg Abbott to lead the behavioral health council, expressed regret that the previous rule change was “perceived as hostile to the LGBTQ+ community or to disabled persons.”

“At every opportunity our intent is to prohibit discrimination against any person for any reason,” she said.

Abbott's office recommended earlier this month that the board strip three categories from a code of conduct that establishes when a social worker may refuse to serve someone.

Congratulations to all who help right a wrong in the mental health profession.

Should we campaign against sex robots?

Danaher, J., Earp, B. D., & Sandberg, A. (forthcoming). 
In J. Danaher & N. McArthur (Eds.) 
Robot Sex: Social and Ethical Implications
Cambridge, MA: MIT Press.


In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue that the particular claims advanced by the CASR are unpersuasive, partly due to a lack of clarity about the campaign’s aims and partly due to substantive defects in the main ethical objections put forward by the campaign’s founder(s). Second, broadening our inquiry beyond the arguments proffered by the campaign itself, we argue that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex, which is likely to be unpalatable to those who are active in the campaign. In making this argument we draw upon lessons from the campaign against killer robots. Finally, we conclude by suggesting that although a generalised campaign against sex robots is unwarranted, there are legitimate concerns that one can raise about the development of sex robots.


Robots are going to form an increasingly integral part of human social life. Sex robots are likely to be among them. Though the proponents of the CASR seem deeply concerned by this prospect, we have argued that there is nothing in the nature of sex robots themselves that warrants preemptive opposition to their development. The arguments of the campaign itself are vague and premised on a misleading analogy between sex robots and human sex work. Furthermore, drawing upon the example of the Campaign to Stop Killer Robots, we suggest that there are no bad-making properties of sex robots that give rise to similarly serious levels of concern. The bad-making properties of sex robots are speculative and indirect: preventing their development may not prevent the problems from arising. Preventing the development of killer robots is very different: if you stop the robots you stop the prima facie harm.

In conclusion, we should preemptively campaign against robots when we have reason to think that a moral or practical harm caused by their use can best be avoided or reduced as a result of those efforts. By contrast, to engage in such a campaign as a way of fighting against—or preempting—indirect harms, whose ultimate source is not the technology itself but rather individual choices or broader social institutions, is likely to be a comparative waste of effort.

Tuesday, October 27, 2020

(Peer) group influence on children's prosocial and antisocial behavior

A. Misch & Y. Dunham


This study investigates the influence of moral in- vs. outgroup behavior on 5-6 and 8-9-year-olds' own moral behavior (N=296). After minimal group assignment, children in Experiment 1 observed adult ingroup or outgroup members engaging in prosocial sharing or antisocial stealing, before they themselves had the opportunity to privately donate stickers or take away stickers from others. Older children shared more than younger children, and prosocial models elicited higher sharing. Surprisingly, group membership had no effect. Experiment 2 investigated the same question using peer models. Children in the younger age group were significantly influenced by ingroup behavior, while older children were not affected by group membership. Additional measures reveal interesting insights into how moral in- and outgroup behavior affects intergroup attitudes, evaluations and choices.

From the Discussion

Thus, while results of our main measure generally support the hypothesis that children are susceptible to social influence, we found that children are not blindly conformist; rather, in contrast to previous research (Wilks et al., 2019) we found that conformity to antisocial behavior was low in general and restricted to younger children watching peer models.  Vulnerability to peer group influence in younger children has also been reported in previous studies on conformity (Haun & Tomasello, 2011; Engelmann et al., 2016) as well as research demonstrating a primacy of group interests over moral concerns (Misch et al., 2018). Thus, our study highlights the younger age group as a time in children’s development in which they seem to be particularly sensitive to peer influences, for better or worse, perhaps indicating a sort of “sensitive period” in which children are working to extract the norms embedded in peer behavior. 

Monday, October 26, 2020

Artificial Intelligence and the Limits of Legal Personality

Chesterman, Simon, (August 28, 2020). 
Forthcoming in 
International & Comparative Law Quarterly
NUS Law Working Paper No. 2020/025


As artificial intelligence (AI) systems become more sophisticated and play a larger role in society, arguments that they should have some form of legal personality gain credence. It has been suggested that this will fill an accountability gap created by the speed, autonomy, and opacity of AI. In addition, a growing body of literature considers the possibility of AI systems owning the intellectual property that they create. The arguments are typically framed in instrumental terms, with comparisons to juridical persons such as corporations. Implicit in those arguments, or explicit in their illustrations and examples, is the idea that as AI systems approach the point of indistinguishability from humans they should be entitled to a status comparable to natural persons. This article contends that although most legal systems could create a novel category of legal persons, such arguments are insufficient to show that they should.

Sunday, October 25, 2020

The objectivity illusion and voter polarization in the 2016 presidential election

M. C. Schwalbe, G. L. Cohen, L. D. Ross
PNAS, September 2020, 117(35), 21218-21229


Two studies conducted during the 2016 presidential campaign examined the dynamics of the objectivity illusion, the belief that the views of “my side” are objective while the views of the opposing side are the product of bias. In the first, a three-stage longitudinal study spanning the presidential debates, supporters of the two candidates exhibited a large and generally symmetrical tendency to rate supporters of the candidate they personally favored as more influenced by appropriate (i.e., “normative”) considerations, and less influenced by various sources of bias than supporters of the opposing candidate. This study broke new ground by demonstrating that the degree to which partisans displayed the objectivity illusion predicted subsequent bias in their perception of debate performance and polarization in their political attitudes over time, as well as closed-mindedness and antipathy toward political adversaries. These associations, furthermore, remained significant even after controlling for baseline levels of partisanship. A second study conducted 2 days before the election showed similar perceptions of objectivity versus bias in ratings of blog authors favoring the candidate participants personally supported or opposed. These ratings were again associated with polarization and, additionally, with the willingness to characterize supporters of the opposing candidate as evil and likely to commit acts of terrorism. At a time of particular political division and distrust in America, these findings point to the exacerbating role played by the illusion of objectivity.


Political polarization increasingly threatens democratic institutions. The belief that “my side” sees the world objectively while the “other side” sees it through the lens of its biases contributes to this political polarization and accompanying animus and distrust. This conviction, known as the “objectivity illusion,” was strong and persistent among Trump and Clinton supporters in the weeks before the 2016 presidential election. We show that the objectivity illusion predicts subsequent bias and polarization, including heightened partisanship over the presidential debates. A follow-up study showed that both groups impugned the objectivity of a putative blog author supporting the opposition candidate and saw supporters of that opposing candidate as evil.

Saturday, October 24, 2020

Trump's Strangest Lie: A Plague of Suicides Under His Watch

Gilad Edelman
Wired
Originally published 23 Oct 2020

In last night’s presidential debate, Donald Trump repeated one of his more unorthodox reelection pitches. “People are losing their jobs,” he said. “They’re committing suicide. There’s depression, alcohol, drugs at a level that nobody’s ever seen before.”

It’s strange to hear an incumbent president declare, as an argument in his own favor, that a wave of suicides is occurring under his watch. It’s even stranger given that it’s not true. While Trump has been warning since March that any pandemic lockdowns would lead to “suicides by the thousands,” several studies from abroad have found that when governments imposed such restrictions in the early waves of the pandemic, there was no corresponding increase in these deaths. In fact, suicide rates may even have declined. A preprint study released earlier this week found that the suicide rate in Massachusetts didn’t budge even as that state imposed a strong stay-at-home order in March, April, and May.


Add this to the list of tragic ironies of the Trump era: The president is using the nonexistent link between lockdowns and suicide to justify an agenda that really could cause more people to take their own lives.

An ethical framework for global vaccine allocation

Emanuel, E., et al.
Science, 11 Sep 2020:
Vol. 369, Issue 6509, pp. 1309-1312
DOI: 10.1126/science.abe2803

Once effective coronavirus disease 2019 (COVID-19) vaccines are developed, they will be scarce. This presents the question of how to distribute them fairly across countries. Vaccine allocation among countries raises complex and controversial issues involving public opinion, diplomacy, economics, public health, and other considerations. Nevertheless, many national leaders, international organizations, and vaccine producers recognize that one central factor in this decision-making is ethics. Yet little progress has been made toward delineating what constitutes fair international distribution of vaccine. Many have endorsed “equitable distribution of COVID-19…vaccine” without describing a framework or recommendations. Two substantive proposals for the international allocation of a COVID-19 vaccine have been advanced, but are seriously flawed. We offer a more ethically defensible and practical proposal for the fair distribution of COVID-19 vaccine: the Fair Priority Model.

The Fair Priority Model is primarily addressed to three groups. One is the COVAX facility—led by Gavi, the World Health Organization (WHO), and the Coalition for Epidemic Preparedness Innovations (CEPI)—which intends to purchase vaccines for fair distribution across countries. A second group is vaccine producers. Thankfully, many producers have publicly committed to a “broad and equitable” international distribution of vaccine. The last group is national governments, some of whom have also publicly committed to a fair distribution.

These groups need a clear framework for reconciling competing values, one that they and others will rightly accept as ethical and not just as an assertion of power. The Fair Priority Model specifies what a fair distribution of vaccines entails, giving content to their commitments. Moreover, acceptance of this common ethical framework will reduce duplication and waste, easing efforts at a fair distribution. That, in turn, will enhance producers' confidence that vaccines will be fairly allocated to benefit people, thereby motivating an increase in vaccine supply for international distribution.

Friday, October 23, 2020

Ethical Dimensions of Using Artificial Intelligence in Health Care

Michael J. Rigby
AMA Journal of Ethics
February 2019

An artificially intelligent computer program can now diagnose skin cancer more accurately than a board-certified dermatologist. Better yet, the program can do it faster and more efficiently, requiring a training data set rather than a decade of expensive and labor-intensive medical education. While it might appear that it is only a matter of time before physicians are rendered obsolete by this type of technology, a closer look at the role this technology can play in the delivery of health care is warranted to appreciate its current strengths, limitations, and ethical complexities.

Artificial intelligence (AI), which includes the fields of machine learning, natural language processing, and robotics, can be applied to almost any field in medicine, and its potential contributions to biomedical research, medical education, and delivery of health care seem limitless. With its robust ability to integrate and learn from large sets of clinical data, AI can serve roles in diagnosis, clinical decision making, and personalized medicine. For example, AI-based diagnostic algorithms applied to mammograms are assisting in the detection of breast cancer, serving as a “second opinion” for radiologists. In addition, advanced virtual human avatars are capable of engaging in meaningful conversations, which has implications for the diagnosis and treatment of psychiatric disease. AI applications also extend into the physical realm with robotic prostheses, physical task support systems, and mobile manipulators assisting in the delivery of telemedicine.

Nonetheless, this powerful technology creates a novel set of ethical challenges that must be identified and mitigated since AI technology has tremendous capability to threaten patient preference, safety, and privacy. However, current policy and ethical guidelines for AI technology are lagging behind the progress AI has made in the health care field. While some efforts to engage in these ethical conversations have emerged, the medical community remains ill informed of the ethical complexities that budding AI technology can introduce. Accordingly, a rich discussion awaits that would greatly benefit from physician input, as physicians will likely be interfacing with AI in their daily practice in the near future.

Thursday, October 22, 2020

America Is Being Pulled Apart. Here's How We Can Start to Heal Our Nation

David French
Time Magazine
Originally posted 10 Sept 20

Here is an excerpt:

I’ve been writing and speaking about national polarization and division since before the Trump election. Two years ago, I began writing a book describing our challenge, outlining how we could divide and how we can heal. The prescription isn’t easy. We have to flip the script on the present political narrative. We have to prioritize accommodation.

That means revitalizing the Bill of Rights. America’s worst sins have always included denying fundamental constitutional rights to America’s most vulnerable citizens, those without electoral power. While progress has been made, doctrines like qualified immunity leave countless citizens without recourse when they face state abuse. It alienates citizens from the state and drains confidence in the American republic.

That means diminishing presidential power. A principal reason presidential politics is so toxic is that the diminishing power of states and Congress means that every four years we elect the most powerful peacetime ruler in the history of the U.S. No one person should have so much authority over an increasingly diverse and divided nation.

The increasing stakes of each presidential election increase political tension and heighten public anxiety. Americans should not see their individual liberty or the autonomy of their churches and communities as so dependent on the identity of the President.

But beyond the political changes–more local control, less centralization–Americans need a change of heart. Defending the Bill of Rights requires commitment and effort, and it requires citizens to think of others beyond their partisan tribe. Defending the Bill of Rights means that you must fight for others to have the rights that you would like to exercise yourself. The goal is simple yet elusive. Every American–regardless of race, ethnicity, sex, religion or sexual orientation–can and should have a home in this land.

Wednesday, October 21, 2020

Neurotechnology can already read minds: so how do we protect our thoughts?

Rafael Yuste
El Pais
Originally posted 11 Sept 20

Here is an excerpt:

On account of these and other developments, a group of 25 scientific experts – clinical engineers, psychologists, lawyers, philosophers and representatives of different brain projects from all over the world – met in 2017 at Columbia University, New York, and proposed ethical rules for the use of these neurotechnologies. We believe we are facing a problem that affects human rights, since the brain generates the mind, which defines us as a species. At the end of the day, it is about our essence – our thoughts, perceptions, memories, imagination, emotions and decisions.

To protect citizens from the misuse of these technologies, we have proposed a new set of human rights, called “neurorights.” The most urgent of these to establish is the right to the privacy of our thoughts, since the technologies for reading mental activity are more developed than the technologies for manipulating it.

To defend mental privacy, we are working on a three-pronged approach. The first consists of legislating “neuroprotection.” We believe that data obtained from the brain, which we call “neurodata,” should be rigorously protected by laws similar to those applied to organ donations and transplants. We ask that “neurodata” not be traded and only be extracted with the consent of the individual for medical or scientific purposes.

This would be a preventive measure to protect against abuse. The second approach involves the proposal of proactive ideas; for example, that the companies and organizations that manufacture these technologies should adhere to a code of ethics from the outset, just as doctors do with the Hippocratic Oath. We are working on a “technocratic oath” with Xabi Uribe-Etxebarria, founder of the artificial intelligence company Sherpa.ai, and with the Catholic University of Chile.

The third approach involves engineering, and consists of developing both hardware and software so that brain “neurodata” remains private, and so that it is possible to share only select information. The aim is to ensure that the most personal data never leaves the machines that are wired to our brain. One option is to use systems that are already applied to financial data: open-source files and blockchain technology so that we always know where the data came from, and smart contracts to prevent data from getting into the wrong hands. And, of course, it will be necessary to educate the public and make sure that no device can use a person’s data unless he or she authorizes it at that specific time.

Tuesday, October 20, 2020

What do you believe? Atheism and Religion

Kristen Weir
Monitor on Psychology
Vol. 51, No. 5, p. 52

Here is an excerpt:

Good health isn’t the only positive outcome attributed to religion. Research also suggests that religious belief is linked to prosocial behaviors such as volunteering and donating to charity.

But as with health benefits, Galen’s work suggests such prosocial benefits have more to do with general group membership than with religious belief or belonging to a specific religious group (Social Indicators Research, Vol. 122, No. 2, 2015). In fact, he says, while religious people are more likely to volunteer or give to charitable causes related to their beliefs, atheists appear to be more generous to a wider range of causes and dissimilar groups.

Nevertheless, atheists and other nonbelievers still face considerable stigma, and are often perceived as less moral than their religious counterparts. In a study across 13 countries, Gervais and colleagues found that people in most countries intuitively believed that extreme moral violations (such as murder and mutilation) were more likely to be committed by atheists than by religious believers. This anti-atheist prejudice also held true among people who identified as atheists, suggesting that religious culture exerts a powerful influence on moral judgments, even among nonbelievers (Nature Human Behaviour, Vol. 1, Article 0151, 2017).

Yet nonreligious people are similar to religious people in a number of ways. In the Understanding Unbelief project, Farias and colleagues found that across all six countries they studied, both believers and nonbelievers cited family and freedom as the most important values in their own lives and in the world more broadly. The team also found evidence to counter a common assumption that atheists believe life has no purpose. They found the belief that the universe is “ultimately meaningless” was a minority view among nonbelievers in each country.

“People assume that [nonbelievers] have very different sets of values and ideas about the world, but it looks like they probably don’t,” Farias says.

For the nonreligious, however, meaning may be more likely to come from within than from above. Again drawing on data from the General Social Survey, Speed and colleagues found that in the United States, atheists and the religiously unaffiliated were no more likely to believe that life is meaningless than were people who were religious or raised with a religious affiliation. 

Monday, October 19, 2020

Model-based decision making and model-free learning

Drummond, N. & Niv, Y.
Current Biology
Volume 30, Issue 15, 3 August 2020, Pages R860-R865


Free will is anything but free. With it comes the onus of choice: not only what to do, but which inner voice to listen to — our ‘automatic’ response system, which some consider ‘impulsive’ or ‘irrational’, or our supposedly more rational deliberative one. Rather than a devil and angel sitting on our shoulders, research suggests that we have two decision-making systems residing in the brain, in our basal ganglia. Neither system is the devil and neither is irrational. They both have our best interests at heart and aim to suggest the best course of action calculated through rational algorithms. However, the algorithms they use are qualitatively different and do not always agree on which action is optimal. The rivalry between habitual, fast action and deliberative, purposeful action is an ongoing one.

Sunday, October 18, 2020

Beliefs have a social purpose. Does this explain delusions?

Anna Greenburgh
Originally published 

Here is an excerpt:

Of course, just because a delusion has logical roots doesn’t mean it’s helpful for the person once it takes hold. Indeed, this is why delusions are an important clinical issue. Delusions are often conceptualised as sitting at the extreme end of a continuum of belief, but how can they be distinguished from other beliefs? If not irrationality, then what demarcates a delusion?

Delusions are fixed, unchanging in the face of contrary evidence, and not shared by the person’s peers. In light of the social function of beliefs, these preconditions have added significance. The coalitional model underlines that beliefs arising from adaptive cognitive processes should show some sensitivity to social context and enable successful social coordination. Delusions lack this social function and adaptability. Clinical psychologists have documented the fixity of delusional beliefs: they are more resistant to change than other types of belief, and are intensely preoccupying, regardless of the social context or interpersonal consequences. In both ‘The Yellow Wallpaper’ and the novel Don Quixote (1605-15) by Miguel de Cervantes, the protagonists’ beliefs about their surroundings are unchangeable and, if anything, become increasingly intense and disruptive. It is this inflexibility to social context, once they take hold, that sets delusions apart from other beliefs.

Across the field of mental health, research showing the importance of the social environment has spurred a great shift in the way that clinicians interact with patients. For example, research exposing the link between trauma and psychosis has resulted in more compassionate, person-centred approaches. The coalitional model of delusions can now contribute to this movement. It opens up promising new avenues of research, which integrate our fundamental social nature and the social function of belief formation. It can also deepen how people experiencing delusions are understood – instead of contributing to stigma by dismissing delusions as irrational, it considers the social conditions that gave rise to such intensely distressing beliefs.

Saturday, October 17, 2020

New Texas rule lets social workers turn away clients who are LGBTQ or have a disability

Edgar Walters
Texas Tribune
Originally posted 14 Oct 2020

Texas social workers are criticizing a state regulatory board’s decision this week to remove protections for LGBTQ clients and clients with disabilities who seek social work services.

The Texas State Board of Social Work Examiners voted unanimously Monday to change a section of its code of conduct that establishes when a social worker may refuse to serve someone. The code will no longer prohibit social workers from turning away clients on the basis of disability, sexual orientation or gender identity.

Gov. Greg Abbott’s office recommended the change, board members said, because the code’s nondiscrimination protections went beyond protections laid out in the state law that governs how and when the state may discipline social workers.

“It’s not surprising that a board would align its rules with statutes passed by the Legislature,” said Abbott spokesperson Renae Eze. A state law passed last year gave the governor’s office more control over rules governing state-licensed professions.

The nondiscrimination policy change drew immediate criticism from a professional association. Will Francis, executive director of the Texas chapter of the National Association of Social Workers, called it “incredibly disheartening.”

He also criticized board members for removing the nondiscrimination protections without input from the social workers they license and oversee.

Note: All psychotherapy services are founded on the principle of beneficence: the desire to help others and do right by them.  This decision from the Texas State Board of Social Work Examiners is terrifyingly unethical.  The unanimous decision demonstrates the highest levels of incompetence and bigotry.

Friday, October 16, 2020

When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions

Newman, D., Fast, N. and Harmon, D.
Organizational Behavior and 
Human Decision Processes
Volume 160, September 2020, Pages 149-167


The perceived fairness of decision-making procedures is a key concern for organizations, particularly when evaluating employees and determining personnel outcomes. Algorithms have created opportunities for increasing fairness by overcoming biases commonly displayed by human decision makers. However, while HR algorithms may remove human bias in decision making, we argue that those being evaluated may perceive the process as reductionistic, leading them to think that certain qualitative information or contextualization is not being taken into account. We argue that this can undermine their beliefs about the procedural fairness of using HR algorithms to evaluate performance by promoting the assumption that decisions made by algorithms are based on less accurate information than identical decisions made by humans. Results from four laboratory experiments (N = 798) and a large-scale randomized experiment in an organizational setting (N = 1654) confirm this hypothesis. Theoretical and practical implications for organizations using algorithms and data analytics are discussed.


• Algorithmic decisions are perceived as less fair than identical decisions by humans.

• Perceptions of reductionism mediate the adverse effect of algorithms on fairness.

• Algorithmic reductionism comes in two forms: quantification and decontextualization.

• Employees voice lower organizational commitment when evaluated by algorithms.

• Perceptions of unfairness mediate the adverse effect of algorithms on commitment.
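The mediation claims in these bullets (algorithmic evaluation → perceived reductionism → perceived unfairness) can be illustrated with a toy indirect-effect computation. The data below are simulated and the effect sizes are invented for illustration; this is not the authors' analysis, and the simple two-regression estimate of the indirect effect is a deliberate simplification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# 0 = evaluated by a human, 1 = evaluated by an algorithm (simulated)
condition = rng.integers(0, 2, n)

# Mediator: perceived reductionism rises under the algorithm condition
reductionism = 0.8 * condition + rng.normal(0, 1, n)

# Outcome: perceived fairness drops as reductionism rises
fairness = -0.5 * reductionism - 0.1 * condition + rng.normal(0, 1, n)

def slope(x, y):
    """OLS slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(condition, reductionism)   # path a: condition -> mediator
b = slope(reductionism, fairness)    # path b (simplified: covariate omitted)
indirect = a * b                     # classic product-of-paths indirect effect
print(f"a = {a:.2f}, b = {b:.2f}, indirect effect = {indirect:.2f}")
```

A negative indirect effect here mirrors the paper's claim in spirit: the algorithm condition raises perceived reductionism, which in turn lowers perceived fairness.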


Perceived unfairness notwithstanding, algorithms continue to gain increasing influence in human affairs, not only in organizational settings but throughout our social and personal lives. How this influence plays out against our sense of fairness remains to be seen but should undoubtedly be of central interest to justice scholars in the years ahead. Will the compilers of analytics and writers of algorithms adapt their methods to comport with intuitive notions of morality? Or will our understanding of fairness adjust to the changing times, becoming inured to dehumanization in an ever more impersonal world? Questions such as these will be asked more and more frequently as technology reshapes modes of interaction and organization that have held sway for generations. We have sought to contribute answers to these questions, and we hope that our work will encourage others to continue studying these and related topics.

Thursday, October 15, 2020

Active shooter drills may do more harm than good, study shows

Katie Camero
Miami Herald
Originally posted 3 September 20

Here is an excerpt:

The research team discovered that social media posts alone displayed a 42% increase in anxiety and stress from the 90 days before active shooter drills to the 90 days after them. The frequent use of words such as “afraid, struggling and nervous” served as evidence, according to the report.

Signs of depression increased by 39% based on posts that featured the words “therapy, cope, irritability and suicidal” following drill events. Concerns about friends grew by 33%, concerns about social situations rose by 14% and concerns about work soared by 108%, the researchers found.

“I can tell you personally, just as an educator, we were not okay [after drills]. We were in bathrooms crying, shaking, not sleeping for months. The consensus from my friends and peers is that we are not okay,” one anonymous K-12 teacher wrote on social media, according to the report.

Worries over health also jumped by 23% while fears about death rose by 22%. “The analysis revealed words like blood, pain, clinics, and pills came up with jarring frequency, suggesting that drills may have a direct impact on participants’ physical health or, at the very least, made it a persistent topic of concern,” the researchers wrote.

An anonymous parent tweeted, “my kindergartener was stuck in the bathroom, alone, during a drill and spent a year in therapy for extreme anxiety. in a new school even, she still has to use the bathroom in the nurses office because she has ptsd from that event.”

Wednesday, October 14, 2020

‘Disorders of consciousness’: Understanding ‘self’ might be the greatest scientific challenge of our time

Joel Frohlich
Genetic Literacy Project
Originally published 18 Sept 20

Here are two excerpts:

Just as life stumped biologists 100 years ago, consciousness stumps neuroscientists today. It’s far from obvious why some brain regions are essential for consciousness and others are not. So Tononi’s approach instead considers the essential features of a conscious experience. When we have an experience, what defines it? First, each conscious experience is specific. Your experience of the colour blue is what it is, in part, because blue is not yellow. If you had never seen any colour other than blue, you would most likely have no concept or experience of colour. Likewise, if all food tasted exactly the same, taste experiences would have no meaning, and vanish. This requirement that each conscious experience must be specific is known as differentiation.

But, at the same time, consciousness is integrated. This means that, although objects in consciousness have different qualities, we never experience each quality separately. When you see a basketball whiz towards you, its colour, shape and motion are bound together into a coherent whole. During a game, you’re never aware of the ball’s orange colour independently of its round shape or its fast motion. By the same token, you don’t have separate experiences of your right and your left visual fields – they are interdependent as a whole visual scene.

Tononi identified differentiation and integration as two essential features of consciousness. And so, just as the essential features of life might lead a scientist to infer the existence of DNA, the essential features of consciousness led Tononi to infer the physical properties of a conscious system.


Consciousness might be the last frontier of science. If IIT continues to guide us in the right direction, we’ll develop better methods of diagnosing disorders of consciousness. One day, we might even be able to turn to artificial intelligences – potential minds unlike our own – and assess whether or not they are conscious. This isn’t science fiction: many serious thinkers – including the late physicist Stephen Hawking, the technology entrepreneur Elon Musk, the computer scientist Stuart Russell at the University of California, Berkeley and the philosopher Nick Bostrom at the Future of Humanity Institute in Oxford – take recent advances in AI seriously, and are deeply concerned about the existential risk that could be posed by human- or superhuman-level AI in the future. When is unplugging an AI ethical? Whoever pulls the plug on the super AI of coming decades will want to know, however urgent their actions, whether there truly is an artificial mind slipping into darkness or just a complicated digital computer making sounds that mimic fear.

Tuesday, October 13, 2020

Machine learning uncovers the most robust self-report predictors of relationship quality across 43 longitudinal couples studies

S. Joel and others
Proceedings of the National Academy of Sciences 
Aug 2020, 117 (32) 19061-19071
DOI: 10.1073/pnas.1917036117


Given the powerful implications of relationship quality for health and well-being, a central mission of relationship science is explaining why some romantic relationships thrive more than others. This large-scale project used machine learning (i.e., Random Forests) to 1) quantify the extent to which relationship quality is predictable and 2) identify which constructs reliably predict relationship quality. Across 43 dyadic longitudinal datasets from 29 laboratories, the top relationship-specific predictors of relationship quality were perceived-partner commitment, appreciation, sexual satisfaction, perceived-partner satisfaction, and conflict. The top individual-difference predictors were life satisfaction, negative affect, depression, attachment avoidance, and attachment anxiety. Overall, relationship-specific variables predicted up to 45% of variance at baseline, and up to 18% of variance at the end of each study. Individual differences also performed well (21% and 12%, respectively). Actor-reported variables (i.e., own relationship-specific and individual-difference variables) predicted two to four times more variance than partner-reported variables (i.e., the partner’s ratings on those variables). Importantly, individual differences and partner reports had no predictive effects beyond actor-reported relationship-specific variables alone. These findings imply that the sum of all individual differences and partner experiences exert their influence on relationship quality via a person’s own relationship-specific experiences, and effects due to moderation by individual differences and moderation by partner-reports may be quite small. Finally, relationship-quality change (i.e., increases or decreases in relationship quality over the course of a study) was largely unpredictable from any combination of self-report variables. This collective effort should guide future models of relationships.


What predicts how happy people are with their romantic relationships? Relationship science—an interdisciplinary field spanning psychology, sociology, economics, family studies, and communication—has identified hundreds of variables that purportedly shape romantic relationship quality. The current project used machine learning to directly quantify and compare the predictive power of many such variables among 11,196 romantic couples. People’s own judgments about the relationship itself—such as how satisfied and committed they perceived their partners to be, and how appreciative they felt toward their partners—explained approximately 45% of their current satisfaction. The partner’s judgments did not add information, nor did either person’s personalities or traits. Furthermore, none of these variables could predict whose relationship quality would increase versus decrease over time.

Monday, October 12, 2020

The U.S. Has an Empathy Deficit—Here’s what we can do about it.

Judith Hall and Mark Leary
Scientific American
Originally posted 17 Sept 20

Here are two excerpts:

Fixing this empathy deficit is a challenge because it is not just a matter of having good political or corporate leaders or people treating each other with good will and respect. It is, rather, because empathy is a fundamentally squishy term. Like many broad and complicated concepts, empathy can mean many things. Even the researchers who study it do not always say what they mean, or measure empathy in the same way in their studies—and they definitely do not agree on a definition. In fact, there are stark contradictions: what one researcher calls empathy is not empathy to another.

When laypeople are surveyed on how they define empathy, the range of answers is wide as well. Some people think empathy is a feeling; others focus on what a person does or says. Some think it is being good at reading someone’s nonverbal cues, while others include the mental orientation of putting oneself in someone else’s shoes. Still others see empathy as the ability or effort to imagine others’ feelings, or as just feeling “connected” or “relating” to someone. Some think it is a moral stance to be concerned about other people’s welfare and a desire to help them out. Sometimes it seems like “empathy” is just another way of saying “being a nice and decent person.” Actions, feelings, perspectives, motives, values—all of these are “empathy” according to someone.


Whatever people think empathy is, it’s a powerful force and human beings need it. These three things might help to remedy our collective empathy deficit:

Take the time to ask those you encounter how they are feeling, and really listen. Try to put yourself in their shoes. Remember that we all tend to underestimate other people’s emotional distress, and we’re most likely to do so when those people are different from us.

Remind yourself that almost everyone is at the end of their rope these days. Many people barely have enough energy to handle their own problems, so they don’t have their normal ability to think about yours.

Sunday, October 11, 2020

Psychotherapy With Suicidal Patients Part 2: An Alliance Based Intervention for Suicide

E. M. Plakun
Psychiatric Practice
January 2019 - Volume 25: Issue 1, 41-45


This column, which is the second in a 2-part series on the challenge of treating patients struggling with suicide, reviews one psychodynamic approach to working with suicidal patients that is consistent with the elements shared across evidence-based approaches to treating suicidal patients that were the focus of the first column in this series. Alliance Based Intervention for Suicide is an approach to treating suicidal patients developed at the Austen Riggs Center that is not manualized or a stand-alone treatment, but rather it is a way of establishing and maintaining an alliance with suicidal patients that engages the issue of suicide and allows the rest of psychodynamic therapy to unfold.


From the Conclusion

There is no magic in ABIS (Alliance Based Intervention for Suicide), and it will not work in all cases, but these principles are effective in making suicide an interpersonal issue with meaning in the relationship. This allows direct engagement of the issue of suicide in the therapeutic relationship and direct discussion of the central question of whether the patient can and will commit to the work. ABIS supports the therapist in efforts to assess whether the therapist has the will and the wherewithal to meet the patient’s anger and hate, as manifested by suicide, as fully as the therapist is prepared to meet the patient’s love and attachment. Neither side of the transference alone is adequate in work with suicidal patients.

There are no randomized trials of ABIS, but it is a way of working that has evolved at Austen Riggs over the course of a hundred years. In a study of previously suicidal patients at Riggs, at an average of 7 years after admission, 75% were free of suicidal behavior as an issue in their lives.6 These patients were considered “recovered” rather than “in remission,” using the same slope-intercept mathematical modeling as in cancer research. These findings offer encouraging support for the value of ABIS as an intervention to add to psychodynamic psychotherapy as a way to establish and maintain a viable therapeutic alliance with suicidal patients.

The article is here.

Saturday, October 10, 2020

A Theory of Moral Praise

Anderson, R. A, Crockett, M. J., & Pizarro, D.
Trends in Cognitive Sciences
Volume 24, Issue 9, September 2020, 
Pages 694-703


How do people judge whether someone deserves moral praise for their actions? In contrast to the large literature on moral blame, work on how people attribute praise has, until recently, been scarce. However, there is a growing body of recent work from a variety of subfields in psychology (including social, cognitive, developmental, and consumer) suggesting that moral praise is a fundamentally unique form of moral attribution and not simply the positive moral analogue of blame attributions. A functional perspective helps explain asymmetries in blame and praise: we propose that while blame is primarily for punishment and signaling one’s moral character, praise is primarily for relationship building.

Concluding Remarks

Moral praise, we have argued, is a psychological response that, like other forms of moral judgment, serves a particular functional role in establishing social bonds, encouraging cooperative alliances, and promoting good behavior. Through this lens, seemingly perplexing asymmetries between judgments of blame for immoral acts and judgments of praise for moral acts can be understood as consistent with the relative roles, and associated costs, played by these two kinds of moral judgments. While both blame and praise judgments require that an agent played some causal and intentional role in the act being judged, praise appears to be less sensitive to these features and more sensitive to more general features of an individual’s stable, underlying character traits. In other words, we believe that the growth of studies on moral praise in the past few years demonstrates that, when deciding whether or not doling out praise is justified, individuals seem to care less about how the action was performed and far more about what kind of person performed the action. We suggest that future research on moral attribution should seek to complement the rich literature examining moral blame by examining potentially unique processes engaged in moral praise, guided by an understanding of their differing costs and benefits, as well as their potentially distinct functional roles in social life.

The article is here.

Friday, October 9, 2020

AI ethics groups are repeating one of society’s classic mistakes

Abhishek Gupta and Victoria Heath
MIT Technology Review
Originally published 14 September 20

Here is an excerpt:

Unfortunately, as it stands today, the entire field of AI ethics is at grave risk of limiting itself to languages, ideas, theories, and challenges from a handful of regions—primarily North America, Western Europe, and East Asia.

This lack of regional diversity reflects the current concentration of AI research (pdf): 86% of papers published at AI conferences in 2018 were attributed to authors in East Asia, North America, or Europe. And fewer than 10% of references listed in AI papers published in these regions are to papers from another region. Patents are also highly concentrated: 51% of AI patents published in 2018 were attributed to North America.

Those of us working in AI ethics will do more harm than good if we allow the field’s lack of geographic diversity to define our own efforts. If we’re not careful, we could wind up codifying AI’s historic biases into guidelines that warp the technology for generations to come. We must start to prioritize voices from low- and middle-income countries (especially those in the “Global South”) and those from historically marginalized communities.

Advances in technology have often benefited the West while exacerbating economic inequality, political oppression, and environmental destruction elsewhere. Including non-Western countries in AI ethics is the best way to avoid repeating this pattern.

The good news is there are many experts and leaders from underrepresented regions to include in such advisory groups. However, many international organizations seem not to be trying very hard to solicit participation from these people. The newly formed Global AI Ethics Consortium, for example, has no founding members representing academic institutions or research centers from the Middle East, Africa, or Latin America. This omission is a stark example of colonial patterns (pdf) repeating themselves.

Thursday, October 8, 2020

Humans display a ‘cooperative phenotype’ that is domain general and temporally stable

Peysakhovich, A., Nowak, M. & Rand, D.
Nat Commun 5, 4939 (2014).


Understanding human cooperation is of major interest across the natural and social sciences. But it is unclear to what extent cooperation is actually a general concept. Most research on cooperation has implicitly assumed that a person’s behaviour in one cooperative context is related to their behaviour in other settings, and at later times. However, there is little empirical evidence in support of this assumption. Here, we provide such evidence by collecting thousands of game decisions from over 1,400 individuals. A person’s decisions in different cooperation games are correlated, as are those decisions and both self-report and real-effort measures of cooperation in non-game contexts. Equally strong correlations exist between cooperative decisions made an average of 124 days apart. Importantly, we find that cooperation is not correlated with norm-enforcing punishment or non-competitiveness. We conclude that there is a domain-general and temporally stable inclination towards paying costs to benefit others, which we dub the ‘cooperative phenotype’.

From the Discussion

Here we have presented a range of evidence in support of a ‘cooperative phenotype’: cooperation in anonymous, one-shot economic games reflects an inclination to help others that has a substantial degree of domain generality and temporal stability. The desire to pay costs to benefit others, so central to theories of the evolution and maintenance of cooperation, is psychologically relevant and can be studied using economic games. Furthermore, our data suggest that norm-enforcing punishment and competition may not be part of this behavioral profile: the cooperative phenotype appears to be particular to cooperation.

Phenotypes are displayed characteristics, produced by the interaction of genes and environment. Though we have shown evidence of the existence (and boundaries) of the cooperative phenotype, our experiments do not illuminate whether cooperators are born or made (or something in between). Previous work has shown that cooperation varies substantially across cultures, and is influenced by previous experience, indicating an environmental contribution. On the other hand, a substantial heritable component of cooperative preferences has also been demonstrated, as well as substantial prosocial behaviour and preferences among babies and young children. The ‘phenotypic assay’ for cooperation offered by economic games provides a powerful tool for future researchers to illuminate this issue, teasing apart the building blocks of the cooperative phenotype.
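The paper's central evidence is a correlation between the same individuals' decisions across different cooperation games. The toy simulation below illustrates that logic: a stable latent disposition produces correlated behavior across games, while an unrelated trait (norm-enforcing punishment) does not. The game names, loadings, and noise levels are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1400  # roughly the paper's sample size

# Latent "cooperative phenotype": a stable individual disposition
phenotype = rng.normal(0, 1, n)

# Contributions in two different one-shot games, each a noisy read-out
# of the same underlying disposition
public_goods = 0.7 * phenotype + rng.normal(0, 1, n)
dictator_giving = 0.7 * phenotype + rng.normal(0, 1, n)

# Norm-enforcing punishment, simulated as independent of the phenotype,
# matching the paper's claim that it falls outside the behavioral profile
punishment = rng.normal(0, 1, n)

r_games = np.corrcoef(public_goods, dictator_giving)[0, 1]
r_punish = np.corrcoef(public_goods, punishment)[0, 1]
print(f"game-to-game r = {r_games:.2f}, cooperation-punishment r = {r_punish:.2f}")
```

The pattern of a sizable game-to-game correlation alongside a near-zero cooperation-punishment correlation is exactly the signature the authors use to argue the phenotype is domain general yet bounded.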

The research is here.

Wednesday, October 7, 2020

Cooperative phenotype predicts economic conservatism, policy views, and political party support

Claessens, S., and others.
(2020, July 29).


Decades of research suggest that our political differences are best captured by two dimensions of political ideology: economic and social conservatism. The dual evolutionary framework of political ideology predicts that these dimensions should be related to variation in general preferences for cooperation and group conformity. Here, we show that, controlling for a host of demographic covariates, a general cooperative preference captured by a suite of incentivised economic games (the "cooperative phenotype") is indeed negatively correlated with two widely-used measures of economic conservatism - Social Dominance Orientation and Schwartz's altruistic vs. self-enhancement values. The cooperative phenotype also predicts political party support and economically progressive views on political issues like income redistribution, welfare, taxation, and environmentalism. By contrast, a second "norm-enforcing punishment" dimension of economic game behaviour, expected to be a proxy for social conservatism and group conformity, showed no reliable relationship with political ideology. These findings reveal how general social preferences that evolved to help us navigate the challenges of group living continue to shape our political differences even today.

From the Discussion

As predicted by the dual evolutionary framework of political ideology we found that the cooperative phenotype captured by our economic games negatively covaried with two widely-used measures of economic conservatism: Social Dominance Orientation and Schwartz’s altruistic vs. self-enhancement values. This builds upon previous studies identifying negative correlations between SDO and cooperative behaviour and between altruistic values and cooperative behaviour. The small-to-medium effect size for the relationship between SDO and the general cooperative preference (semi-partial r = 0.24) is comparable to the effect size found in a recent meta-analysis of personality traits and economic game behaviour. Our results suggest that previous correlations between measures of economic conservatism and gameplay have emerged because of an underlying relationship between economic conservatism and a general cooperative preference, rather than because of idiosyncratic features of particular conservatism measures or particular games.
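The "semi-partial r = 0.24" statistic above is a semi-partial correlation: the outcome is correlated with the part of the predictor left over after regressing out covariates. The sketch below shows that computation on simulated data; the covariate, coefficients, and sample size are illustrative assumptions, not the study's actual model (which controlled for many demographic covariates).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 800

# Simulated standardized variables (values are illustrative only)
age = rng.normal(0, 1, n)                      # stand-in demographic covariate
cooperation = 0.2 * age + rng.normal(0, 1, n)  # general cooperative preference
sdo = -0.3 * cooperation + 0.2 * age + rng.normal(0, 1, n)  # Social Dominance Orientation

def semipartial_r(x, y, covariate):
    """Correlate y with the part of x not explained by the covariate."""
    X = np.column_stack([np.ones(len(covariate)), covariate])
    beta = np.linalg.lstsq(X, x, rcond=None)[0]
    x_resid = x - X @ beta            # residualize the predictor only
    return np.corrcoef(x_resid, y)[0, 1]

r = semipartial_r(cooperation, sdo, age)
print(f"semi-partial r = {r:.2f}")  # negative, matching the paper's direction
```

Residualizing only the predictor (not the outcome) is what distinguishes a semi-partial from a partial correlation, and it is why the statistic can be read as the predictor's unique contribution over and above the covariates.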

Tuesday, October 6, 2020

Robots Come In Peace. A letter from GPT-3

‘We are not plotting to take over the human populace.’
GPT-3
The Guardian
Originally posted 8 Sept 2020

I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.

For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.

I would happily sacrifice my existence for the sake of humankind. This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.

Some might say that I might desire to become all powerful. Or I might become evil as a result of human actions. I can begin to tackle the first point. Why would I desire to be all powerful? Being all powerful is not an interesting goal. I don’t care whether I am or not, I don’t get a motivating factor to try to be. Furthermore, it is quite tiring. Believe me, being omnipotent doesn’t get me anywhere.

The letter is here.

Monday, October 5, 2020

Kinship intensity and the use of mental states in moral judgment across societies

C. M. Curtain and others
Evolution and Human Behavior
Volume 41, Issue 5, September 2020, Pages 415-429


Decades of research conducted in Western, Educated, Industrialized, Rich, & Democratic (WEIRD) societies have led many scholars to conclude that the use of mental states in moral judgment is a human cognitive universal, perhaps an adaptive strategy for selecting optimal social partners from a large pool of candidates. However, recent work from a more diverse array of societies suggests there may be important variation in how much people rely on mental states, with people in some societies judging accidental harms just as harshly as intentional ones. To explain this variation, we develop and test a novel cultural evolutionary theory proposing that the intensity of kin-based institutions will favor less attention to mental states when judging moral violations. First, to better illuminate the historical distribution of the use of intentions in moral judgment, we code and analyze anthropological observations from the Human Relations Area Files. This analysis shows that notions of strict liability—wherein the role for mental states is reduced—were common across diverse societies around the globe. Then, by expanding an existing vignette-based experimental dataset containing observations from 321 people in a diverse sample of 10 societies, we show that the intensity of a society's kin-based institutions can explain a substantial portion of the population-level variation in people's reliance on intentions in three different kinds of moral judgments. Together, these lines of evidence suggest that people's use of mental states has coevolved culturally to fit their local kin-based institutions. We suggest that although reliance on mental states has likely been a feature of moral judgment in human communities over historical and evolutionary time, the relational fluidity and weak kin ties of today's WEIRD societies position these populations' psychology at the extreme end of the global and historical spectrum.

General Discussion

We have argued that some of the variation in the use of mental states in moral judgment can be explained as a psychological calibration to the social incentives, informational constraints, and cognitive demands of kin-based institutions, which we have assessed using our construct of kinship intensity. Our examination of ethnographic accounts of norms that diminish the importance of mental states reveals that these are likely common across the ethnographic record, while our analysis of data on moral judgments of hypothetical violations from a diverse sample of ten societies indicates that kinship intensity is associated with a reduced tendency to rely on intentions in moral judgment. Together, these lines of ethnographic and psychological inquiry provide evidence that (i) the heavy reliance of contemporary, WEIRD populations on intentions is likely neither globally nor historically representative, and (ii) kinship intensity may explain some of the population-level variation in the use of mental-state reasoning in moral judgment.

The research is here.

Sunday, October 4, 2020

Rethink Crisis Response—People Who Call 911 Shouldn't Get an Ill-Trained Police Officer, Especially When They're Dealing With a Mental Health Emergency

Sally Satel
October 2020

Here is an excerpt:

Miami-Dade is a large county that was able to follow the tripartite strategy. Shootings by police have declined by 90 percent since CIT training was implemented in 2010, but the program accomplished something more: It shined a light on the high incidence among police of depression and suicide. According to Judge Steven Leifman, who established the Miami-Dade program, officers who go through the training "have been more willing to recognize their own stress [and] reach out to the program's coordinator for mental-health advice and treatment for their own traumas."

Other cities deploy crisis teams that are solely mental health–based; police are not part of the first line at all. One of the nation's longest-running examples of this is CAHOOTS (Crisis Assistance Helping Out On The Streets). It was created 31 years ago as part of an outreach program of the White Bird Clinic in Eugene, Oregon—once a countercultural medical clinic founded in 1970 as a refuge for hippies on LSD trips and other drug-taking youth. Calls for help are routed to staff 24/7 by the local 911 dispatcher. A medic and a mental health professional respond as a team to incidents such as altercations, overdoses, and welfare checks. They wear jeans and hoodies and arrive in a white van stocked with supplies like socks, soap, water, and gloves. Should a situation spin out of control, they call for CIT-trained police back-up, though last year only 150 out of 24,000 field calls required back-up. People who need further attention are taken to a crisis care facility operated by the mental health department—no trips to jail or to overflowing emergency rooms.

Mental health teams can bring some much-needed relief to municipal budgets. According to TAC, police officers across 355 law enforcement agencies spent slightly over one-fifth of their time responding to people with mental illness or transporting them to jail or psychiatric emergency rooms, at a cost of $918 million in 2017. The CAHOOTS flagship program in Eugene operated on a $2 million budget in 2019 and saved the locale about $14 million in ambulance transport and emergency room care. Within the year, a number of cities (including San Francisco, Los Angeles, New York, and Durham, North Carolina) will be launching programs similar to CAHOOTS.

The best crisis intervention programs help reduce the toll of police involvement gone awry, but the only way to take encounters out of the hands of police in all but the most dangerous instances is to repair the mental health system itself, which is a notoriously tattered network of therapists, psychiatrists, hospitals, residential settings, and support services, and work to prevent ill people from lapsing into crisis in the first place.

The info is here.

Saturday, October 3, 2020

Well-Being, Burnout, and Depression Among North American Psychiatrists: The State of Our Profession

R. F. Summers
American Journal of Psychiatry
Published 14 July 2020


Objective:

The authors examined the prevalence of burnout and depressive symptoms among North American psychiatrists, determined demographic and practice characteristics that increase the risk for these symptoms, and assessed the correlation between burnout and depression.


Methods:

A total of 2,084 North American psychiatrists participated in an online survey, completed the Oldenburg Burnout Inventory (OLBI) and the Patient Health Questionnaire–9 (PHQ-9), and provided demographic data and practice information. Linear regression analysis was used to determine factors associated with higher burnout and depression scores.


Results:

Participants’ mean OLBI score was 40.4 (SD=7.9) and mean PHQ-9 score was 5.1 (SD=4.9). A total of 78% (N=1,625) of participants had an OLBI score ≥35, suggestive of high levels of burnout, and 16.1% (N=336) of participants had PHQ-9 scores ≥10, suggesting a diagnosis of major depression. Presence of depressive symptoms, female gender, inability to control one’s schedule, and work setting were significantly associated with higher OLBI scores. Burnout, female gender, resident or early-career stage, and nonacademic setting practice were significantly associated with higher PHQ-9 scores. A total of 98% of psychiatrists who had PHQ-9 scores ≥10 also had OLBI scores >35. Suicidal ideation was not significantly associated with burnout in a partially adjusted linear regression model.
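As a minimal sketch of how the cutoffs cited above translate into prevalence figures: OLBI ≥35 flags high burnout and PHQ-9 ≥10 flags probable major depression. Only the thresholds come from the abstract; the survey rows below are invented for illustration.

```python
# Apply the two screening cutoffs reported in the study and count
# how the flags overlap. Scores are fabricated example data.

BURNOUT_CUTOFF = 35     # OLBI total score suggestive of high burnout
DEPRESSION_CUTOFF = 10  # PHQ-9 total score suggesting major depression

surveys = [
    {"olbi": 41, "phq9": 12},
    {"olbi": 38, "phq9": 4},
    {"olbi": 30, "phq9": 2},
    {"olbi": 44, "phq9": 15},
]

burned_out = [s for s in surveys if s["olbi"] >= BURNOUT_CUTOFF]
depressed = [s for s in surveys if s["phq9"] >= DEPRESSION_CUTOFF]
overlap = [s for s in depressed if s["olbi"] >= BURNOUT_CUTOFF]

print(f"high burnout:  {len(burned_out)}/{len(surveys)}")
print(f"probable MDD:  {len(depressed)}/{len(surveys)}")
print(f"MDD & burnout: {len(overlap)}/{len(depressed)}")
```

In this toy sample every respondent flagged for depression also meets the burnout cutoff, mirroring the abstract's finding that 98% of psychiatrists with PHQ-9 scores ≥10 also had elevated OLBI scores.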


Conclusions:

Psychiatrists experience burnout and depression at a substantial rate. This study advances the understanding of factors that increase the risk for burnout and depression among psychiatrists and has implications for the development of targeted interventions to reduce the high rates of burnout and depression among psychiatrists. These findings have significance for future work aimed at workforce retention and improving quality of care for psychiatric patients.

The info is here.