Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Discrimination.

Wednesday, August 28, 2019

Profit Versus Prejudice: Harnessing Self-Interest to Reduce In-Group Bias

Stagnaro, M. N., Dunham, Y., & Rand, D. G. (2018).
Social Psychological and Personality Science, 9(1), 50–58.
https://doi.org/10.1177/1948550617699254

Abstract

We examine the possibility that self-interest, typically thought to undermine social welfare, might reduce in-group bias. We compared the dictator game (DG), where participants unilaterally divide money between themselves and a recipient, and the ultimatum game (UG), where the recipient can reject these offers. Unlike the DG, there is a self-interested motive for UG giving: If participants expect the rejection of unfair offers, they have a monetary incentive to be fair even to out-group members. Thus, we predicted substantial bias in the DG but little bias in the UG. We tested this hypothesis in two studies (N = 3,546) employing a 2 (in-group/out-group, based on abortion position) × 2 (DG/UG) design. We observed the predicted significant group by game interaction, such that the substantial in-group favoritism observed in the DG was almost entirely eliminated in the UG: Giving the recipient bargaining power reduced the premium offered to in-group members by 77.5%.

Discussion
Here we have provided evidence that self-interest has the potential to override in-group bias based on a salient and highly charged real-world grouping (abortion stance). In the DG, where participants had the power to offer whatever they liked, we saw clear evidence of behavior favoring in-group members. In the UG, where the recipient could reject the offer, acting on such biases had the potential to severely reduce earnings. Participants anticipated this, as shown by their expectations of partner behavior, and made fair offers to both in-group and out-group participants.

Traditionally, self-interest is considered a negative force in intergroup relations. For example, an individual might give free rein to a preference for interacting with similar others, and even be willing to pay a cost to satisfy those preferences, resulting in what has been called “taste-based” discrimination (Becker, 1957). Although we do not deny that such discrimination can (and often does) occur, we suggest that in the right context, the costs it can impose serve as a disincentive. In particular, when strategic concerns are heightened, as they are in multilateral interactions where the parties must come to an agreement and failing to do so is both salient and costly (such as the UG), self-interest has the opportunity to mitigate biased behavior. Here, we provide one example of such a situation: We find that participants successfully withheld bias in the UG, making equally fair offers to both in-group and out-group recipients.
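
Editor's note: the strategic logic behind the authors' prediction can be made concrete with a toy expected-value calculation. The sketch below is purely illustrative (the rejection probabilities are made up, and this is not the authors' analysis); it simply shows why a self-interested proposer can afford to act on bias in the dictator game but not in the ultimatum game.

```python
# Illustrative sketch (not the authors' analysis): why self-interest pushes
# ultimatum-game (UG) proposers toward fair offers, while dictator-game (DG)
# proposers can favor the in-group at no cost to themselves.

# Hypothetical rejection behavior: recipients are assumed to reject offers they
# consider unfair, out-group recipients slightly more often than in-group ones.
def rejection_prob(offer, pot, out_group):
    share = offer / pot
    if share >= 0.4:
        return 0.05                     # near-fair offers are almost always accepted
    return 0.7 if out_group else 0.6    # unfair offers are usually rejected

def expected_proposer_payoff(offer, pot, game, out_group):
    if game == "DG":                    # dictator game: the recipient has no veto
        return pot - offer
    p_reject = rejection_prob(offer, pot, out_group)   # ultimatum game
    return (1 - p_reject) * (pot - offer)              # rejection leaves both with 0

pot = 10
for game in ("DG", "UG"):
    for out_group in (False, True):
        best = max(range(0, pot + 1),
                   key=lambda o: expected_proposer_payoff(o, pot, game, out_group))
        print(game, "out-group recipient" if out_group else "in-group recipient",
              "-> payoff-maximizing offer:", best)
# In the DG the payoff-maximizing offer is 0 for everyone, so any giving reflects
# social preferences and can be biased for free. In the UG, low offers risk being
# rejected, so the self-interested offer moves toward a fair split for both groups.
```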

Monday, August 26, 2019

Proprietary Algorithms for Polygenic Risk: Protecting Scientific Innovation or Hiding the Lack of It?

A. Cecile J. W. Janssens
Genes 2019, 10(6), 448
https://doi.org/10.3390/genes10060448

Abstract

Direct-to-consumer genetic testing companies aim to predict the risks of complex diseases using proprietary algorithms. Companies keep algorithms as trade secrets for competitive advantage, but a market that thrives on the premise that customers can make their own decisions about genetic testing should respect customer autonomy and informed decision making and maximize opportunities for transparency. The algorithm itself is only one piece of the information that is deemed essential for understanding how prediction algorithms are developed and evaluated. Companies should be encouraged to disclose everything else, including the expected risk distribution of the algorithm when applied in the population, using a benchmark DNA dataset. A standardized presentation of information and risk distributions allows customers to compare test offers and scientists to verify whether the undisclosed algorithms could be valid. A new model of oversight in which stakeholders collaboratively keep a check on the commercial market is needed.

Here is the conclusion:

Oversight of the direct-to-consumer market for polygenic risk algorithms is complex and time-sensitive. Algorithms are frequently adapted to the latest scientific insights, which may make evaluations obsolete before they are completed. A standardized format for the provision of essential information could readily provide insight into the logic behind the algorithms, the rigor of their development, and their predictive ability. The development of this format gives responsible providers the opportunity to lead by example and show that much can be shared when there is nothing to hide.
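
Editor's note: for readers unfamiliar with how these algorithms work, here is a minimal sketch of the kind of disclosure the author argues for: the risk distribution obtained by applying a weighted-sum algorithm to a benchmark cohort. The SNP weights, allele frequencies, and genotypes below are simulated and do not correspond to any company's proprietary model.

```python
# Minimal sketch of a polygenic risk algorithm applied to a benchmark cohort
# (hypothetical SNP weights and simulated genotypes -- not any company's model).
import numpy as np

rng = np.random.default_rng(0)

n_snps, n_people = 50, 10_000
weights = rng.normal(0.0, 0.1, n_snps)          # per-allele log-odds weights (made up)
freqs = rng.uniform(0.05, 0.5, n_snps)          # risk-allele frequencies (made up)
genotypes = rng.binomial(2, freqs, (n_people, n_snps))   # 0/1/2 risk-allele counts

scores = genotypes @ weights                     # polygenic score = weighted allele count
intercept = -2.0                                 # sets the baseline disease prevalence
risks = 1 / (1 + np.exp(-(intercept + scores)))  # logistic model -> predicted risk

# The kind of summary the paper argues should be disclosed: the distribution of
# predicted risks when the undisclosed algorithm is applied to a benchmark dataset.
for pct in (5, 25, 50, 75, 95):
    print(f"{pct}th percentile predicted risk: {np.percentile(risks, pct):.3f}")
print("share classified 'elevated risk' (>2x median):",
      float(np.mean(risks > 2 * np.median(risks))))
```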

Wednesday, August 14, 2019

Getting AI ethics wrong could 'annihilate technical progress'

Richard Gray
TechXplore
Originally published July 30, 2019

Here is an excerpt:

Biases

But these algorithms can also learn the biases that already exist in data sets. If a police database shows that mainly young, black men are arrested for a certain crime, it may not be a fair reflection of the actual offender profile and instead reflect historic racism within a force. Using AI taught on this kind of data could exacerbate problems such as racism and other forms of discrimination.
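
Editor's note: the mechanism described above can be illustrated with a few lines of simulated data (a toy sketch, not the project's analysis). A standard classifier trained on arrest records that over-represent one group treats the recording imbalance as if it were a difference in underlying behavior.

```python
# Toy sketch (simulated data): a classifier trained on arrest records that
# over-police one group learns the imbalance as if it were signal about the
# underlying offending rate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
offended = rng.random(n) < 0.10            # true offending rate identical in both groups

# Biased recording: offences by group B lead to an arrest record far more often.
record_prob = np.where(group == 1, 0.9, 0.3)
arrested = offended & (rng.random(n) < record_prob)

model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
risk_a, risk_b = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted 'risk' for group A: {risk_a:.3f}, group B: {risk_b:.3f}")
# Both groups offend at the same rate, yet the model scores group B as roughly
# three times riskier, because the training labels reflect who was arrested,
# not who offended.
```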

"Transparency of these algorithms is also a problem," said Prof. Stahl. "These algorithms do statistical classification of data in a way that makes it almost impossible to see how exactly that happened." This raises important questions about how legal systems, for example, can remain fair and just if they start to rely upon opaque 'black box' AI algorithms to inform sentencing decisions or judgements about a person's guilt.

The next step for the project will be to look at potential interventions that can be used to address some of these issues. It will look at where guidelines can help ensure AI researchers build fairness into their algorithms, where new laws can govern their use and if a regulator can keep negative aspects of the technology in check.

But one of the problems many governments and regulators face is keeping up with the fast pace of change in new technologies like AI, according to Professor Philip Brey, who studies the philosophy of technology at the University of Twente, in the Netherlands.

"Most people today don't understand the technology because it is very complex, opaque and fast moving," he said. "For that reason it is hard to anticipate and assess the impacts on society, and to have adequate regulatory and legislative responses to that. Policy is usually significantly behind."

The info is here.

Tuesday, February 26, 2019

Strengthening Our Science: AGU Launches Ethics and Equity Center

Robyn Bell
EOS.org
Originally published February 14, 2019

In the next century, our species will face a multitude of challenges. A diverse and inclusive community of researchers ready to lead the way is essential to solving these global-scale challenges. While Earth and space science has made many positive contributions to society over the past century, our community has suffered from a lack of diversity and a culture that tolerates unacceptable and divisive conduct. Bias, harassment, and discrimination create a hostile work climate, undermining the entire global scientific enterprise and its ability to benefit humanity.

As we considered how our Centennial can launch the next century of amazing Earth and space science, we focused on working with our community to build diverse, inclusive, and ethical workplaces where all participants are encouraged to develop their full potential. That’s why I’m so proud to announce the launch of the AGU Ethics and Equity Center, a new hub for comprehensive resources and tools designed to support our community across a range of topics linked to ethics and workplace excellence. The Center will provide resources to individual researchers, students, department heads, and institutional leaders. These resources are designed to help share and promote leading practices on issues ranging from building inclusive environments, to scientific publications and data management, to combating harassment, to example codes of conduct. AGU plans to transform our culture in scientific institutions so we can achieve inclusive excellence.

The info is here.

Tuesday, February 12, 2019

Certain Moral Values May Lead to More Prejudice, Discrimination

American Psychological Association Press Release
Released December 20, 2018

People who value following purity rules over caring for others are more likely to view gay and transgender people as less human, which leads to more prejudice and support for discriminatory public policies, according to a new study published by the American Psychological Association.

“After the Supreme Court decision affirming marriage equality and the debate over bathroom rights for transgender people, we realized that the arguments were often not about facts but about opposing moral beliefs,” said Andrew E. Monroe, PhD, of Appalachian State University and lead author of the study, published in the Journal of Experimental Psychology: General®.

“Thus, we wanted to understand if moral values were an underlying cause of prejudice toward gay and transgender people.”

Monroe and his co-author, Ashby Plant, PhD, of Florida State University, focused on two specific moral values — what they called sanctity, or a strict adherence to purity rules and disgust over any acts that are considered morally contaminating, and care, which centers on disapproval of others who cause suffering without just cause — because they predicted those values might be behind the often-heated debates over LGBTQ rights. 

The researchers conducted five experiments with nearly 1,100 participants. Overall, they found that people who prioritized sanctity over care were more likely to believe that gay and transgender people, people with AIDS and prostitutes were more impulsive, less rational and, therefore, something less than human. These attitudes increased prejudice and acceptance of discriminatory public policies, according to Monroe.

The info is here.

The research is here.

Wednesday, June 6, 2018

The LAPD’s Terrifying Policing Algorithm: Yes It’s Basically ‘Minority Report’

Dan Robitzski
Futurism.com
Originally posted May 11, 2018

The Los Angeles Police Department was recently forced to release documents about their predictive policing and surveillance algorithms, thanks to a lawsuit from the Stop LAPD Spying Coalition (which turned the documents over to In Justice Today). And what do you think the documents have to say?

If you guessed “evidence that policing algorithms, which require officers to keep a checklist of (and keep an eye on) 12 people deemed most likely to commit a crime, are continuing to propagate a vicious cycle of disproportionately high arrests of black Angelenos, as well as other racial minorities,” you guessed correctly.

Algorithms, no matter how sophisticated, are only as good as the information that’s provided to them. So when you feed an AI data from a city where there’s a problem of demonstrably, mathematically racist over-policing of neighborhoods with concentrations of people of color, and then have it tell you who the police should be monitoring, the result will only be as great as the process. And the process? Not so great!
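
Editor's note: the vicious cycle the article describes can be sketched with a small simulation (purely illustrative; it is not based on the released LAPD documents). When patrols follow past recorded arrests and new records follow patrols, an initial disparity between two otherwise identical neighborhoods grows rather than washing out.

```python
# Purely illustrative feedback-loop sketch (not based on the LAPD documents):
# patrols are targeted at the "hot spot" suggested by past recorded arrests,
# and recorded arrests in turn depend on where the patrols go.
true_crime = [1.0, 1.0]        # two neighborhoods with identical underlying crime
recorded = [1.0, 2.0]          # but neighborhood 1 starts with more recorded arrests

for step in range(10):
    weights = [r ** 2 for r in recorded]             # hot-spot targeting: extra patrols
    total = sum(weights)                             # go to the area with more records
    patrols = [w / total for w in weights]
    new_arrests = [c * p for c, p in zip(true_crime, patrols)]   # records track patrols
    recorded = [r + a for r, a in zip(recorded, new_arrests)]
    print(f"step {step}: share of all records from neighborhood 1 = "
          f"{recorded[1] / sum(recorded):.2f}")
# Despite identical underlying crime, the neighborhood that starts with more
# recorded arrests draws more patrols, which generate more records, which draw
# more patrols: the initial disparity grows instead of being corrected.
```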

The article is here.

Tuesday, April 24, 2018

When therapists face discrimination

Zara Abrams
The Monitor on Psychology - April 2018

Here is an excerpt:

Be aware of your own internalized biases. 

Reflecting on their own social, cultural and political perspectives means practitioners are less likely to be caught off guard by something a client says. “It’s important for psychologists to be aware of what a client’s biases and prejudices are bringing up for them internally, so as not to project that onto the client—it’s important to really understand what’s happening,” says Kathleen Brown, PhD, a licensed clinical psychologist and APA fellow.

For Kelly, the Atlanta-based clinical psychologist, this means she’s careful not to assume that resistant clients are treating her disrespectfully because she’s African American. Sometimes her clients, who are referred for pre-surgical evaluation and treatment, are difficult or even hostile because their psychological intervention was mandated.

Foster an open dialogue about diversity and identity issues.

“The benefit of having that conversation, even though it can be scary or uncomfortable to bring it up in the room, is that it prevents it from festering or interfering with your ability to provide high-quality care to the client,” says Illinois-based clinical psychologist Robyn Gobin, PhD, who has experienced ageism from patients. She responds to ageist remarks by exploring what specific concerns the client has regarding her age (like Turner, she looks young). If she’s met with criticism, she tries to remain receptive, understanding that the client is vulnerable and any hostility the client expresses reflects concern for his or her own well-being. By being open and frank from the start, she shows her clients the appropriate way to confront their biases in therapy.

Of course, practitioners approach these conversations differently. If a client makes a prejudiced remark about another group, Buckman says labeling the comment as “offensive” shifts the attention from the client onto her. “It doesn’t get to the core of what’s going on with them. In the long run, exploring a way to shift how the client interacts with the ‘other’ is probably more valuable than standing up for a group in the moment.”

The information is here.

Monday, March 26, 2018

Bill to Bar LGBTQ Discrimination Stokes New Nebraska Debate

Tess Williams
US News and World Report
Originally published February 22, 2018

A bill that would prevent psychologists from discriminating against patients based on their sexual orientation or gender identity is reviving a nearly decade-old dispute in Nebraska state government.

Sen. Patty Pansing Brooks of Lincoln said Thursday that her bill would adopt the code of conduct from the American Psychological Association, which prevents discrimination of protected classes of people, but does not require professionals to treat patients if they lack expertise or it conflicts with their personal beliefs. The professional would have to provide an adequate referral instead.

Pansing Brooks said the bill will likely not become law, but she hopes it will bring attention to the ongoing problem. She said she hopes it will be resolved internally, but if a conclusion is not reached, she plans to call for a hearing later this year and will "not let this issue die."

The state Board of Psychology proposed new regulations in 2008, and the following year, the Department of Health and Human Services sent the changes to the Nebraska Catholic Conference for review. Pansing Brooks said she is unsure why the religious organization was given special review.

The article is here.

Monday, February 26, 2018

How Doctors Deal With Racist Patients

Sumathi Reddy
The Wall Street Journal
Originally published January 22, 2018

Here is an excerpt:

Patient discrimination against physicians and other health-care providers is an oft-ignored topic in a high-stress job where care always comes first. Experts say patients request another physician based on race, religion, gender, age and sexual orientation.

No government entity keeps track of such incidents. Neither do most hospitals. But more trainees and physicians are coming forward with stories and more hospitals and academic institutions are trying to address the issue with new guidelines and policies.

The examples span race and religion. A Korean-American doctor’s tweet about white nationalists refusing treatment from her in the emergency room went viral in August.

A trauma surgeon at a hospital in Charlotte, N.C., published a piece on KevinMD, a website for physicians, last year detailing his own experiences with discrimination given his Middle Eastern heritage.

Penn State College of Medicine adopted language into its patient rights policy in May that says patient requests for providers based on gender, race, ethnicity or sexual orientation won’t be honored. It adds that some requests based on gender will be evaluated on a case-by-case basis.

The article is here.

Sunday, January 7, 2018

Are human rights anything more than legal conventions?

John Tasioulas
aeon.co
Originally published April 11, 2017

We live in an age of human rights. The language of human rights has become ubiquitous, a lingua franca used for expressing the most basic demands of justice. Some are old demands, such as the prohibition of torture and slavery. Others are newer, such as claims to internet access or same-sex marriage. But what are human rights, and where do they come from? This question is made urgent by a disquieting thought. Perhaps people with clashing values and convictions can so easily appeal to ‘human rights’ only because, ultimately, they don’t agree on what they are talking about? Maybe the apparently widespread consensus on the significance of human rights depends on the emptiness of that very notion? If this is true, then talk of human rights is rhetorical window-dressing, masking deeper ethical and political divisions.

Philosophers have debated the nature of human rights since at least the 12th century, often under the name of ‘natural rights’. These natural rights were supposed to be possessed by everyone and discoverable with the aid of our ordinary powers of reason (our ‘natural reason’), as opposed to rights established by law or disclosed through divine revelation. Wherever there are philosophers, however, there is disagreement. Belief in human rights left open how we go about making the case for them – are they, for example, protections of human needs generally or only of freedom of choice? There were also disagreements about the correct list of human rights – should it include socio-economic rights, like the rights to health or work, in addition to civil and political rights, such as the rights to a fair trial and political participation?

The article is here.

Monday, July 10, 2017

When Are Doctors Too Old to Practice?

By Lucette Lagnado
The Wall Street Journal
Originally posted June 24, 2017

Here is an excerpt:

Testing older physicians for mental and physical ability is growing more common. Nearly a fourth of physicians in America are 65 or older, and 40% of these are actively involved in patient care, according to the American Medical Association. Experts at the AMA have suggested that they be screened lest they pose a risk to patients. An AMA working group is considering guidelines.

Concern over older physicians' mental states--and whether it is safe for them to care for patients--has prompted a number of institutions, from Stanford Health Care in Palo Alto, Calif., to Driscoll Children's Hospital in Corpus Christi, Texas, to the University of Virginia Health System, to adopt age-related physician policies in recent years. The goal is to spot problems, in particular signs of cognitive decline or dementia.

Now, as more institutions like Cooper embrace the measures, they are roiling some older doctors and raising questions of fairness, scientific validity--and ageism.

"It is not for the faint of heart, this policy," said Ann Weinacker, 66, the former chief of staff at the hospital and professor of medicine at Stanford University who has overseen the controversial efforts to implement age-related screening at Stanford hospital.

A group of doctors has been battling Stanford's age-based physician policies for the past five years, contending they are demeaning and discriminatory. The older doctors got the medical staff to scrap a mental-competency exam aimed at testing for cognitive impairment. Most, like Frank Stockdale, an 81-year-old breast-cancer specialist, refused to take it.

The article is here.

Tuesday, May 2, 2017

AI Learning Racism, Sexism and Other Prejudices from Humans

Ian Johnston
The Independent
Originally published April 13, 2017

Artificially intelligent robots and devices are being taught to be racist, sexist and otherwise prejudiced by learning from humans, according to new research.

A massive study of millions of words online looked at how close different terms were to each other in the text – the same way that automatic translators use “machine learning” to establish what language means.

Some of the results were stunning.

(cut)

“We have demonstrated that word embeddings encode not only stereotyped biases but also other knowledge, such as the visceral pleasantness of flowers or the gender distribution of occupations,” the researchers wrote.

The study also implies that humans may develop prejudices partly because of the language they speak.

“Our work suggests that behaviour can be driven by cultural history embedded in a term’s historic use. Such histories can evidently vary between languages,” the paper said.
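
Editor's note: the association measure the researchers describe can be sketched in a few lines. The three-dimensional vectors below are made-up stand-ins; the actual study used word embeddings trained on a large web corpus and standard implicit-association word lists.

```python
# Minimal sketch of the kind of association test described above, using made-up
# 3-dimensional vectors in place of real word embeddings trained on web text.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical embeddings: nearby vectors mean the words appear in similar contexts.
emb = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def association(word, attr_a="pleasant", attr_b="unpleasant"):
    # positive value: the word sits closer to "pleasant" than to "unpleasant"
    return cosine(emb[word], emb[attr_a]) - cosine(emb[word], emb[attr_b])

for w in ("flower", "insect"):
    print(w, round(association(w), 3))
# A differential-association measure of this kind, applied to names, occupations,
# and attribute words in real embeddings, is what lets researchers quantify the
# stereotyped associations the article describes.
```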

The article is here.

Thursday, March 2, 2017

Jail cells await mentally ill in Rapid City

Mike Anderson
Rapid City Journal
Originally published February 7, 2017

Mentally ill people in Rapid City who have committed no crimes will probably end up in jail because of a major policy change recently announced by Rapid City Regional Hospital.

The hospital is no longer taking in certain types of mentally ill patients and will instead contact the Pennington County Sheriff’s Office to take them into custody.

The move has prompted criticism from local law enforcement officials, who say the decision was made suddenly and without their input.

“In my view, this is the biggest step backward our community has experienced in terms of health care for mental health patients,” said Rapid City police Chief Karl Jegeris. “And though it’s legally permissible by statute to put someone in an incarceration setting, it doesn’t mean that it’s the right thing to do.”

This is the second major policy change to come out of Regional in recent days that places limits on the type of mental health care the hospital will provide.

The article is here.

Wednesday, September 28, 2016

The Ethics of Behavioral Health Information Technology

Michelle Joy, Timothy Clement, and Dominic Sisti
JAMA. Published online September 08, 2016.
doi:10.1001/jama.2016.12534

Here is an excerpt:

Individuals with mental illness and addiction experience negative stereotyping, prejudice, discrimination, distancing, and marginalization—social dynamics commonly called stigma. These dynamics are also often internalized and accepted by individuals with mental health conditions, amplifying their negative effect. Somewhat counterintuitively, stigmatizing beliefs about these patients are common among health care workers and often more common among mental health care professionals. Given these facts, the reinforcement of any stigmatizing concept within the medical record system or health information infrastructure is ethically problematic.

Stigmatizing iconography presents the potential for problematic clinical consequences. Patients with dual psychiatric and medical conditions often receive low-quality medical care and experience worse outcomes. One factor in this disparity is the phenomenon of diagnostic overshadowing. For example, diagnostic overshadowing can occur in patients with co-occurring mental illness and conditions such as cardiovascular disease or diabetes. These patients are less likely to receive appropriate medical care than patients without a mental health condition—their psychiatric conditions overshadow their other conditions, potentially biasing the clinician’s judgment about diagnosis and treatment such that the clinician may misattribute physical symptoms to mental health problems.

The article is here.

Friday, April 8, 2016

Tennessee Lawmakers Pass Bill Permitting Mental Health Professionals to Discriminate

By Eric Levitz
New York Magazine
Originally posted April 6, 2016

Tennessee's House of Representatives just passed a bill that would allow therapists who believe homosexuality is the mark of Satan to refuse to treat gay clients. More precisely, the bill allows mental-health counselors to deny treatment to anyone who seeks help with "goals, outcomes, or behaviors that conflict with the sincerely held principles of the counselor or therapist." If the bill makes it into law, Tennessee would be the first state to allow therapists to pick what kind of clients they're willing to serve.

From a certain angle, the law may appear more significant on a symbolic level than a practical one: If you're a gay teenager looking for someone to counsel you through your first same-sex relationship, it's probably in your interest to see someone who doesn't think that relationship will bring you eternal hellfire. But what's really at stake in the legislation is what the ethical code for licensed mental-health professionals in the United States will entail. The bill was drafted in reaction to the American Counseling Association's 2014 code of ethics, which warned counselors not to impose their personal values onto their clients. Tennessee's bill would allow the state's mental-health professionals to reject clients — for failing to conform to their beliefs — without losing their licenses.

The article is here.

Thursday, August 20, 2015

Algorithms and Bias: Q. and A. With Cynthia Dwork

By Claire Cain Miller
The New York Times - The Upshot
Originally posted August 10, 2015

Here is an excerpt:

Q: Some people have argued that algorithms eliminate discrimination because they make decisions based on data, free of human bias. Others say algorithms reflect and perpetuate human biases. What do you think?

A: Algorithms do not automatically eliminate bias. Suppose a university, with admission and rejection records dating back for decades and faced with growing numbers of applicants, decides to use a machine learning algorithm that, using the historical records, identifies candidates who are more likely to be admitted. Historical biases in the training data will be learned by the algorithm, and past discrimination will lead to future discrimination.
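
Editor's note: Dwork's admissions example can be reproduced with a few lines of simulated data (an illustration, not her work). A model fit to historical admit/reject decisions learns whatever double standard those decisions contained.

```python
# Toy illustration of the admissions example (simulated records, not Dwork's):
# a model fit to historical admit/reject labels reproduces the historical group gap.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10_000
grades = rng.normal(3.0, 0.5, n)                 # grades distributed identically in both groups
group = rng.integers(0, 2, n)
noise = rng.normal(0.0, 0.3, n)
# Historical decisions: same grades, but group 1 applicants were held to a higher bar.
admitted = grades + noise > np.where(group == 1, 3.3, 3.0)

X = np.column_stack([grades, group])
model = LogisticRegression().fit(X, admitted)
for g in (0, 1):
    p = model.predict_proba([[3.2, g]])[:, 1][0]  # same 3.2 GPA, different group
    print(f"predicted admit probability for group {g}: {p:.2f}")
# The algorithm has no intent, but because the training labels encode the old
# double standard, it scores otherwise identical applicants differently by group.
```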

The entire article is here.

Tuesday, March 17, 2015

Straight Talk for White Men

By Nicholas Kristof
The New York Times
Originally published on February 21, 2015

Here is an excerpt:

The study found that a résumé with a name like Emily or Greg received 50 percent more callbacks than the same résumé with a name like Lakisha or Jamal. Having a white-sounding name was as beneficial as eight years’ work experience.

Then there was the study in which researchers asked professors to evaluate the summary of a supposed applicant for a post as laboratory manager, but, in some cases, the applicant was named John and in others Jennifer. Everything else was the same.

“John” was rated an average of 4.0 on a 7-point scale for competence, “Jennifer” a 3.3. When asked to propose an annual starting salary for the applicant, the professors suggested on average a salary for “John” almost $4,000 higher than for “Jennifer.”

It’s not that we white men are intentionally doing anything wrong, but we do have a penchant for obliviousness about the way we are beneficiaries of systematic unfairness. Maybe that’s because in a race, it’s easy not to notice a tailwind, and white men often go through life with a tailwind, while women and people of color must push against a headwind.

The entire article is here.

Friday, March 13, 2015

Bias, Black Lives, and Academic Medicine

By David A. Ansell and Edwin K. McDonald
The New England Journal of Medicine
Originally published February 18, 2015

Here is an excerpt:

First, there is evidence that doctors hold stereotypes based on patients' race that can influence their clinical decisions.  Implicit bias refers to unconscious racial stereotypes that grow from our personal and cultural experiences. These implicit beliefs may also stem from a lack of day-to-day interracial and intercultural interactions. Although explicit race bias is rare among physicians, an unconscious preference for whites as compared with blacks is commonly revealed on tests of implicit bias.

Second, despite physicians' and medical centers' best intentions of being equitable, black–white disparities persist in patient outcomes, medical education, and faculty recruitment.

The entire article is here.

Tuesday, December 30, 2014

When Talking About Bias Backfires

By Adam Grant and Sheryl Sandberg
The New York Times - Sunday Review
Originally published December 6, 2014

Here is an excerpt:

Rather than merely informing managers that stereotypes persisted, they added that a “vast majority of people try to overcome their stereotypic preconceptions.” With this adjustment, discrimination vanished in their studies. After reading this message, managers were 28 percent more interested in working with the female candidate who negotiated assertively and judged her as 25 percent more likable.

When we communicate that a vast majority of people hold some biases, we need to make sure that we’re not legitimating prejudice. By reinforcing the idea that people want to conquer their biases and that there are benefits to doing so, we send a more effective message: Most people don’t want to discriminate, and you shouldn’t either.

The entire article is here.

Editor's note: Read the entire article and reflect on how this can influence the way in which psychologists communicate with patients.

Thursday, October 2, 2014

Kaiser to pay $4 million fine over access to mental health services

By Cynthia H. Craft
Sacramento Bee
Originally posted September 10, 2014

Health care giant Kaiser Permanente has agreed to pay a $4 million fine to California’s overseer of managed health care following an 18-month battle with state officials over whether Kaiser blocked patients from timely access to mental health services.

(cut)

Moreover, the department found that Kaiser was likely violating state and federal mental health parity laws. The California Mental Health Parity Act requires managed care providers to provide psychiatric services that are equal in quality and access to their primary care services.

The entire article is here.