Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Thursday, September 6, 2018

Building An Ethics-First Employee Culture Is Crucial For All Leaders

Patrick Quinlan
Forbes.com
Originally posted July 25, 2018

Here is an excerpt:

These issues — harassment, bias, sexism — are nothing new in the tech sector but, now more than ever before, employees are taking matters into their own hands to make sure ethical business practices and values are upheld, even when leadership fails to do so.

In the last few years, we have seen the entire employer-employee paradigm shift, from offices embracing open floor plans to leaders encouraging team bonding activities. Underlying this massive business transformation is a stronger emphasis on the people who make up organizations, their values and their opinions. What we’ve borne witness to is the rise of empowered employees.

For leaders, shifting gears to focus on people first may feel scary. No one wants their employees to point out something is wrong at the company. But if instilling core ethical values is truly your objective, openness to criticism is part of the job description. Your commitment to promoting a safe, empowering space is not only beneficial for your employees — it ultimately strengthens your company’s culture, values, ethics — and therefore your company’s success long-term.

When leaders lead proactively, encouraging employees to come forward with an opinion or idea, employees feel safe, heard and ready to improve their company culture from the inside out. Whether you’re running a technology giant or a small startup, it’s no overnight shift. But you can start small by asking questions, taking feedback seriously and celebrating your team’s autonomy.

The information is here.

Thursday, August 16, 2018

Peer Review is Not Scientific

E Price
medium.com
Originally published June 18, 2018

Here are two excerpts:

The first thing I want all lovers of science to know is this: peer-reviewers are not paid. When you are contacted by a journal editor and asked to conduct a review, there is no discussion of payment, because no payment is available. Ever. Furthermore, peer reviewing is not associated in any direct way with the actual job of being a professor or researcher. The person asking you to conduct a peer review is not your supervisor or the chair of your department, in nearly any circumstance. Your employer does not keep track of how many peer reviews you conduct and reward you appropriately.

Instead, you’re asked by journal editors, via email, on a voluntary basis. And it’s up to you, as a busy faculty member, graduate student, post-doc, or adjunct, to decide whether to say yes or not.

The process is typically anonymized, and tends to be relatively thankless — no one except the editor who has asked you to conduct the review will know that you were involved in the process. There is no quota of reviews a faculty member is expected to provide. Providing a review cannot really be placed on your resume or CV in any meaningful way.

(cut)

The level of scrutiny that an article is subjected to all comes down to chance. If you’re assigned a reviewer who created a theory that opposes your own theory, your work is likely to be picked apart. The reviewer will look very closely for flaws and take issue with everything that they can. This is not inherently a bad thing — research should be closely reviewed — but it’s not unbiased either.

The information is here.

Saturday, July 21, 2018

Bias detectives: the researchers striving to make algorithms fair

Rachel Courtland
Nature.com
Originally posted

Here is an excerpt:

“What concerns me most is the idea that we’re coming up with systems that are supposed to ameliorate problems [but] that might end up exacerbating them,” says Kate Crawford, co-founder of the AI Now Institute, a research centre at New York University that studies the social implications of artificial intelligence.

With Crawford and others waving red flags, governments are trying to make software more accountable. Last December, the New York City Council passed a bill to set up a task force that will recommend how to publicly share information about algorithms and investigate them for bias. This year, France’s president, Emmanuel Macron, has said that the country will make all algorithms used by its government open. And in guidance issued this month, the UK government called for those working with data in the public sector to be transparent and accountable. Europe’s General Data Protection Regulation (GDPR), which came into force at the end of May, is also expected to promote algorithmic accountability.

In the midst of such activity, scientists are confronting complex questions about what it means to make an algorithm fair. Researchers such as Vaithianathan, who work with public agencies to try to build responsible and effective software, must grapple with how automated tools might introduce bias or entrench existing inequity — especially if they are being inserted into an already discriminatory social system.
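
To make the question of what a "fair" algorithm even means a bit more concrete, here is a minimal sketch of one metric that often comes up in this debate, demographic parity. Everything in it (the function, the data, the numbers) is a hypothetical illustration, not something drawn from the article or from any real system it discusses.

```python
# Hypothetical illustration, not from the article or any real system.
# Demographic parity asks whether an algorithm flags members of
# different groups at similar rates.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any
    two groups; 0.0 means parity on this particular metric."""
    counts = {}
    for pred, group in zip(predictions, groups):
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + pred, total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Made-up outputs of a hypothetical risk tool (1 = flagged for review):
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

A gap of 0.5 here means group A is flagged at three times the rate of group B. Note that demographic parity is only one of several competing definitions of fairness, which is part of why, as the article describes, researchers find these questions so hard.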

The information is here.

Monday, June 11, 2018

Discerning bias in forensic psychological reports in insanity cases

Tess M. S. Neal
Behavioral Sciences & the Law, (2018).

Abstract

This project began as an attempt to develop systematic, measurable indicators of bias in written forensic mental health evaluations focused on the issue of insanity. Although forensic clinicians observed in this study did vary systematically in their report‐writing behaviors on several of the indicators of interest, the data are most useful in demonstrating how and why bias is hard to ferret out. Naturalistic data were used in this project (i.e., 122 real forensic insanity reports), which in some ways is a strength. However, given the nature of bias and the problem of inferring whether a particular judgment is biased, naturalistic data also made arriving at conclusions about bias difficult. This paper describes the nature of bias – including why it is a special problem in insanity evaluations – and why it is hard to study and document. It details the efforts made in an attempt to find systematic indicators of potential bias, and how this effort was successful in part, but also how and why it failed. The lessons these efforts yield for future research are described. We close with a discussion of the limitations of this study and future directions for work in this area.

The research is here.

Thursday, March 1, 2018

Concern for Others Leads to Vicarious Optimism

Andreas Kappes, Nadira S. Faber, Guy Kahane, Julian Savulescu, Molly J. Crockett
Psychological Science 
First Published January 30, 2018

Abstract

An optimistic learning bias leads people to update their beliefs in response to better-than-expected good news but neglect worse-than-expected bad news. Because evidence suggests that this bias arises from self-concern, we hypothesized that a similar bias may affect beliefs about other people’s futures, to the extent that people care about others. Here, we demonstrated the phenomenon of vicarious optimism and showed that it arises from concern for others. Participants predicted the likelihood of unpleasant future events that could happen to either themselves or others. In addition to showing an optimistic learning bias for events affecting themselves, people showed vicarious optimism when learning about events affecting friends and strangers. Vicarious optimism for strangers correlated with generosity toward strangers, and experimentally increasing concern for strangers amplified vicarious optimism for them. These findings suggest that concern for others can bias beliefs about their future welfare and that optimism in learning is not restricted to oneself.

From the Discussion section

Optimism is a self-centered phenomenon in which people underestimate the likelihood of negative future events for themselves compared with others (Weinstein, 1980). Usually, the “other” is defined as a group of average others—an anonymous mass. When past studies asked participants to estimate the likelihood of an event happening to either themselves or the average population, participants did not show a learning bias for the average population (Garrett & Sharot, 2014). These findings are unsurprising given that people typically feel little concern for anonymous groups or anonymous individual strangers (Kogut & Ritov, 2005; Loewenstein et al., 2005). Yet people do care about identifiable others, and we accordingly found that people exhibit an optimistic learning bias for identifiable strangers and, even more markedly, for friends. Our research thereby suggests that optimism in learning is not restricted to oneself. We see not only our own lives through rose-tinted glasses but also the lives of those we care about.
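
For readers who want the mechanism spelled out: the optimistic learning bias can be sketched as an asymmetric update rule, in which estimates move more in response to better-than-expected news than to worse-than-expected news. The toy code below is not the authors' model; the learning rates and numbers are hypothetical, chosen only to illustrate the asymmetry.

```python
# Toy sketch of the asymmetric ("optimistic") belief updating described
# above. Not the authors' model; the learning rates below are
# hypothetical values chosen purely for illustration.

def update_belief(belief, evidence, rate_good=0.6, rate_bad=0.2):
    """Move a risk estimate toward new evidence, weighting
    better-than-expected news more than worse-than-expected news."""
    error = evidence - belief
    # A lower-than-believed risk counts as good news here.
    rate = rate_good if error < 0 else rate_bad
    return belief + rate * error

belief = 0.40                         # believed chance of an unpleasant event
belief = update_belief(belief, 0.20)  # good news: large shift -> 0.28
belief = update_belief(belief, 0.50)  # bad news: small shift  -> 0.324
print(round(belief, 3))
```

The study's vicarious-optimism finding amounts to this same asymmetry appearing when the event happens to a friend or an identifiable stranger rather than to oneself.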

The research is here.

Monday, February 26, 2018

How Doctors Deal With Racist Patients

Sumathi Reddy
The Wall Street Journal
Originally published January 22, 2018

Here is an excerpt:

Patient discrimination against physicians and other health-care providers is an oft-ignored topic in a high-stress job where care always comes first. Experts say patients request another physician based on race, religion, gender, age and sexual orientation.

No government entity keeps track of such incidents. Neither do most hospitals. But more trainees and physicians are coming forward with stories and more hospitals and academic institutions are trying to address the issue with new guidelines and policies.

The examples span race and religion. A Korean-American doctor’s tweet about white nationalists refusing treatment in the emergency room went viral in August.

A trauma surgeon at a hospital in Charlotte, N.C., published a piece on KevinMD, a website for physicians, last year detailing his own experiences with discrimination given his Middle Eastern heritage.

Penn State College of Medicine adopted language into its patient rights policy in May that says patient requests for providers based on gender, race, ethnicity or sexual orientation won’t be honored. It adds that some requests based on gender will be evaluated on a case-by-case basis.

The article is here.

Friday, January 26, 2018

Power Causes Brain Damage

Jerry Useem
The Atlantic
Originally published July 2017

Here is an excerpt:

This is a depressing finding. Knowledge is supposed to be power. But what good is knowing that power deprives you of knowledge?

The sunniest possible spin, it seems, is that these changes are only sometimes harmful. Power, the research says, primes our brain to screen out peripheral information. In most situations, this provides a helpful efficiency boost. In social ones, it has the unfortunate side effect of making us more obtuse. Even that is not necessarily bad for the prospects of the powerful, or the groups they lead. As Susan Fiske, a Princeton psychology professor, has persuasively argued, power lessens the need for a nuanced read of people, since it gives us command of resources we once had to cajole from others. But of course, in a modern organization, the maintenance of that command relies on some level of organizational support. And the sheer number of examples of executive hubris that bristle from the headlines suggests that many leaders cross the line into counterproductive folly.

Less able to make out people’s individuating traits, they rely more heavily on stereotype. And the less they’re able to see, other research suggests, the more they rely on a personal “vision” for navigation. John Stumpf saw a Wells Fargo where every customer had eight separate accounts. (As he’d often noted to employees, eight rhymes with great.) “Cross-selling,” he told Congress, “is shorthand for deepening relationships.”

The article is here.

Saturday, May 27, 2017

Why Do So Many Incompetent Men Become Leaders?

Tomas Chamorro-Premuzic
Harvard Business Review
Originally published August 22, 2013

There are three popular explanations for the clear under-representation of women in management, namely: (1) they are not capable; (2) they are not interested; (3) they are both interested and capable but unable to break the glass-ceiling: an invisible career barrier, based on prejudiced stereotypes, that prevents women from accessing the ranks of power. Conservatives and chauvinists tend to endorse the first; liberals and feminists prefer the third; and those somewhere in the middle are usually drawn to the second. But what if they all missed the big picture?

In my view, the main reason for the uneven management sex ratio is our inability to discern between confidence and competence. That is, because we (people in general) commonly misinterpret displays of confidence as a sign of competence, we are fooled into believing that men are better leaders than women. In other words, when it comes to leadership, the only advantage that men have over women (e.g., from Argentina to Norway and the USA to Japan) is the fact that manifestations of hubris — often masked as charisma or charm — are commonly mistaken for leadership potential, and that these occur much more frequently in men than in women.

The article is here.

Saturday, May 20, 2017

Conflict of Interest and the Integrity of the Medical Profession

Allen S. Lichter
JAMA. 2017;317(17):1725-1726.

Physicians have a moral responsibility to patients; they are trusted to place the needs and interests of patients ahead of their own, free of unwarranted outside influences on their decisions. Those who have relationships that might be seen to influence their decisions and behaviors that may affect fulfilling their responsibilities to patients must be fully transparent about them. Two types of interactions and activities involving physicians are most relevant: (1) commercial or research relationships between a physician expert and a health care company designed to advance an idea or promote a product, and (2) various gifts, sponsored meals, and educational offerings that come directly or indirectly to physicians from these companies.

Whether these and other ties to industry are important is not a new issue for medicine. Considerations regarding the potential influence of commercial ties date back at least to the 1950s and 1960s. In 1991, Relman reminded physicians that they have “a unique opportunity to assume personal responsibility for important decisions that are not influenced by or subordinated to the purposes of third parties.” However, examples of potential subordination are easily found. There are reports of physicians who are paid handsomely to promote a drug or device, essentially serving as a company spokesperson; of investigators who have ownership in the company that stands to gain if the clinical trial is successful; and of clinical guideline panels that are dominated by experts with financial ties to companies whose products are relevant to the disease or condition at hand.

The article is here.

Tuesday, May 16, 2017

Talking in Euphemisms Can Chip Away at Your Sense of Morality

Laura Niemi, Alek Chakroff, and Liane Young
The Science of Us
Originally published April 7, 2017

Here is an excerpt:

Taken together, the results suggest that unethical behavior becomes easier when we perceive our own actions in indirect terms, which makes things that we would otherwise balk at seem a bit more palatable. In other words, deploying indirect speech doesn’t just help us evade blame from others — it also helps us to convince ourselves that unethical acts aren’t so bad after all.

That’s not to say that this is a conscious process. A speaker who shrouds his harmful intentions in indirect speech may understand that this will help him hold on to his standing in the public eye, or maintain his reputation among those closest to him — a useful tactic when those intentions are likely to be condemned or fall outside the bounds of socially acceptable behavior. But that same speaker may be unaware of just how much their indirect speech is easing their own psyche, too.

The article is here.

Monday, May 15, 2017

Cassandra’s Regret: The Psychology of Not Wanting to Know

Gigerenzer, Gerd; Garcia-Retamero, Rocio
Psychological Review, Vol 124(2), Mar 2017, 179-196.

Abstract

Ignorance is generally pictured as an unwanted state of mind, and the act of willful ignorance may raise eyebrows. Yet people do not always want to know, demonstrating a lack of curiosity at odds with theories postulating a general need for certainty, ambiguity aversion, or the Bayesian principle of total evidence. We propose a regret theory of deliberate ignorance that covers both negative feelings that may arise from foreknowledge of negative events, such as death and divorce, and positive feelings of surprise and suspense that may arise from foreknowledge of positive events, such as knowing the sex of an unborn child. We conduct the first representative nationwide studies to estimate the prevalence and predictability of deliberate ignorance for a sample of 10 events. Its prevalence is high: Between 85% and 90% of people would not want to know about upcoming negative events, and 40% to 70% prefer to remain ignorant of positive events. Only 1% of participants consistently wanted to know. We also deduce and test several predictions from the regret theory: Individuals who prefer to remain ignorant are more risk averse and more frequently buy life and legal insurance. The theory also implies the time-to-event hypothesis, which states that for the regret-prone, deliberate ignorance is more likely the nearer the event approaches. We cross-validate these findings using 2 representative national quota samples in 2 European countries. In sum, we show that deliberate ignorance exists, is related to risk aversion, and can be explained as avoiding anticipatory regret.

The article is here.

Saturday, March 4, 2017

How ‘Intellectual Humility’ Can Make You a Better Person

Cindy Lamothe
The Science of Us
Originally posted February 3, 2017

There’s a well-known Indian parable about six blind men who argue at length about what an elephant feels like. Each has a different idea, and each holds fast to his own view. “It’s like a rope,” says the man who touched the tail. “Oh no, it’s more like the solid branch of a tree,” contends the one who touched the trunk. And so on and so forth, and round and round they go.

The moral of the story: We all have a tendency to overestimate how much we know — which, in turn, means that we often cling stubbornly to our beliefs while tuning out opinions different from our own. We generally believe we’re better or more correct than everyone else, or at least better than most people — a psychological quirk that’s as true for politics and religion as it is for things like fashion and lifestyles. And in a time when it seems like we’re all more convinced than ever of our own rightness, social scientists have begun to look more closely at an antidote: a concept called intellectual humility.

Unlike general humility — which is defined by traits like sincerity, honesty, and unselfishness — intellectual humility has to do with understanding the limits of one’s knowledge. It’s a state of openness to new ideas, a willingness to be receptive to new sources of evidence, and it comes with significant benefits: People with intellectual humility are both better learners and better able to engage in civil discourse. Google’s VP in charge of hiring, Laszlo Bock, has claimed it as one of the top qualities he looks for in a candidate: Without intellectual humility, he has said, “you are unable to learn.”

The article is here.

Thursday, March 2, 2017

Jail cells await mentally ill in Rapid City

Mike Anderson
Rapid City Journal
Originally published February 7, 2017

Mentally ill people in Rapid City who have committed no crimes will probably end up in jail because of a major policy change recently announced by Rapid City Regional Hospital.

The hospital is no longer taking in certain types of mentally ill patients and will instead contact the Pennington County Sheriff’s Office to take them into custody.

The move has prompted criticism from local law enforcement officials, who say the decision was made suddenly and without their input.

“In my view, this is the biggest step backward our community has experienced in terms of health care for mental health patients,” said Rapid City police Chief Karl Jegeris. “And though it’s legally permissible by statute to put someone in an incarceration setting, it doesn’t mean that it’s the right thing to do.”

This is the second major policy change to come out of Regional in recent days that places limits on the type of mental health care the hospital will provide.

The article is here.

Wednesday, February 8, 2017

Medical culture encourages doctors to avoid admitting mistakes

By Lawrence Schlachter
STAT News
Originally published on January 13, 2017

Here are two excerpts:

In reality, the factor that most influences doctors to hide or disclose medical errors should be clear to anyone who has spent much time in the profession: The culture of medicine frowns on admitting mistakes, usually on the pretense of fear of malpractice lawsuits.

But what’s really at risk are doctors’ egos and the preservation of a system that lets physicians avoid accountability by ignoring problems or shifting blame to “the system” or any culprit other than themselves.

(cut)

What is a patient to do in this environment? The first thing is to be aware of your own predisposition to take everything your doctor says at face value. Listen closely and you may hear cause for more intense questioning.

You will likely never hear the terms negligence, error, mistake, or injury in a hospital. Instead, these harsh but truthful words and phrases are replaced with softer ones like accident, adverse event, or unfortunate outcome. If you hear any of these euphemisms, ask more questions or seek another opinion from a different doctor, preferably at a different facility.

Most doctors would never tell a flagrant lie. But in my experience as a neurosurgeon and as an attorney, too many of them resort to half-truths and glaring omissions when it comes to errors. Beware of passive language like “the patient experienced bleeding” rather than “I made a bad cut”; attributing an error to random chance or a nameless, faceless system; or trivialization of the consequences of the error by claiming something was “a blessing in disguise.”

The article is here.

Tuesday, November 22, 2016

When Disagreement Gets Ugly: Perceptions of Bias and the Escalation of Conflict

Kathleen A. Kennedy and Emily Pronin
Pers Soc Psychol Bull 2008 34: 833

Abstract

It is almost a truism that disagreement produces conflict. This article suggests that perceptions of bias can drive this relationship. First, these studies show that people perceive those who disagree with them as biased. Second, they show that the conflict-escalating approaches that people take toward those who disagree with them are mediated by people's tendency to perceive those who disagree with them as biased. Third, these studies manipulate the mediator and show that experimental manipulations that prompt people to perceive adversaries as biased lead them to respond more conflictually—and that such responding causes those who engage in it to be viewed as more biased and less worthy of cooperative gestures. In summary, this article provides evidence for a “bias-perception conflict spiral,” whereby people who disagree perceive each other as biased, and those perceptions in turn lead them to take conflict-escalating actions against each other (which in turn engender further perceptions of bias, continuing the spiral).

The article is here.

For those who do marital counseling or work in any adversarial system.

Tuesday, November 8, 2016

The Illusion of Moral Superiority

Ben M. Tappin and Ryan T. McKay
Social Psychological and Personality Science
2016, 1-9

Abstract

Most people strongly believe they are just, virtuous, and moral; yet regard the average person as distinctly less so. This invites accusations of irrationality in moral judgment and perception—but direct evidence of irrationality is absent. Here, we quantify this irrationality and compare it against the irrationality in other domains of positive self-evaluation. Participants (N = 270) judged themselves and the average person on traits reflecting the core dimensions of social perception: morality, agency, and sociability. Adapting new methods, we reveal that virtually all individuals irrationally inflated their moral qualities, and the absolute and relative magnitude of this irrationality was greater than that in the other domains of positive self-evaluation. Inconsistent with prevailing theories of overly positive self-belief, irrational moral superiority was not associated with self-esteem. Taken together, these findings suggest that moral superiority is a uniquely strong and prevalent form of “positive illusion,” but the underlying function remains unknown.

The article is here.

Sunday, November 6, 2016

The Psychology of Disproportionate Punishment

Daniel Yudkin
Scientific American
Originally published October 18, 2016

Here is an excerpt:

These studies suggest that certain features of the human mind are prone to “intergroup bias” in punishment. While our slow, thoughtful deliberative side may desire to maintain strong standards of fairness and equality, our more basic, reflexive side may be prone to hostility and aggression to anyone deemed an outsider.

Indeed, this is consistent with what we know about the evolutionary heritage of our species, which spent thousands of years in tightly knit tribal groups competing for scarce resources on the African savannah. Intergroup bias may be tightly woven up in the fabric of everyone’s DNA, ready to emerge under conditions of hurry or stress.

But the picture of human relationships is not all bleak. Indeed, another line of research in which I am involved, led by Avital Mentovich, sheds light on the ways we might transcend the biases that lurk beneath the surface of the psyche.

The article is here.

Wednesday, October 26, 2016

7 Ways We Know Systemic Racism Is Real

benjerry.com

Here is an excerpt:

Racism at Every Level of Society

Systemic racism is about the way racism is built right into every level of our society. Many people point to what they see as less in-your-face prejudice and bias these days, compared to decades past, but as Archbishop Desmond Tutu said, “If you are neutral in situations of injustice, you have chosen the side of the oppressor. If an elephant has its foot on the tail of a mouse and you say that you are neutral, the mouse will not appreciate your neutrality.”

While fewer people may consider themselves racist, racism itself persists in our schools, offices, court system, police departments, and elsewhere. Think about it: when white people occupy most positions of decision-making power, people of color have a difficult time getting a fair shake, let alone getting ahead. Bottom line: we have a lot of work to do.

The blog post is here.

Friday, October 7, 2016

The Difference Between Rationality and Intelligence

By David Hambrick and Alexander Burgoyne
The New York Times
Originally published September 16, 2016

Here is an excerpt:

Professor Morewedge and colleagues found that the computer training led to statistically large and enduring decreases in decision-making bias. In other words, the subjects were considerably less biased after training, even after two months. The decreases were larger for the subjects who received the computer training than for those who received the video training (though decreases were also sizable for the latter group). While there is scant evidence that any sort of “brain training” has any real-world impact on intelligence, it may well be possible to train people to be more rational in their decision making.

The article is here.

Tuesday, October 4, 2016

Whatever you think, you don’t necessarily know your own mind

Keith Frankish
aeon.co
Originally published May 27, 2016

Do you think racial stereotypes are false? Are you sure? I’m not asking if you’re sure whether or not the stereotypes are false, but if you’re sure whether or not you think that they are. That might seem like a strange question. We all know what we think, don’t we?

Most philosophers of mind would agree, holding that we have privileged access to our own thoughts, which is largely immune from error. Some argue that we have a faculty of ‘inner sense’, which monitors the mind just as the outer senses monitor the world. There have been exceptions, however. The mid-20th-century behaviourist philosopher Gilbert Ryle held that we learn about our own minds, not by inner sense, but by observing our own behaviour, and that friends might know our minds better than we do. (Hence the joke: two behaviourists have just had sex and one turns to the other and says: ‘That was great for you, darling. How was it for me?’) And the contemporary philosopher Peter Carruthers proposes a similar view (though for different reasons), arguing that our beliefs about our own thoughts and decisions are the product of self-interpretation and are often mistaken.