Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Showing posts with label Objectivity. Show all posts

Sunday, October 25, 2020

The objectivity illusion and voter polarization in the 2016 presidential election

M. C. Schwalbe, G. L. Cohen, L. D. Ross
PNAS, September 2020, 117 (35), 21218–21229

Abstract

Two studies conducted during the 2016 presidential campaign examined the dynamics of the objectivity illusion, the belief that the views of “my side” are objective while the views of the opposing side are the product of bias. In the first, a three-stage longitudinal study spanning the presidential debates, supporters of the two candidates exhibited a large and generally symmetrical tendency to rate supporters of the candidate they personally favored as more influenced by appropriate (i.e., “normative”) considerations, and less influenced by various sources of bias than supporters of the opposing candidate. This study broke new ground by demonstrating that the degree to which partisans displayed the objectivity illusion predicted subsequent bias in their perception of debate performance and polarization in their political attitudes over time, as well as closed-mindedness and antipathy toward political adversaries. These associations, furthermore, remained significant even after controlling for baseline levels of partisanship. A second study conducted 2 d before the election showed similar perceptions of objectivity versus bias in ratings of blog authors favoring the candidate participants personally supported or opposed. These ratings were again associated with polarization and, additionally, with the willingness to characterize supporters of the opposing candidate as evil and likely to commit acts of terrorism. At a time of particular political division and distrust in America, these findings point to the exacerbating role played by the illusion of objectivity.

Significance

Political polarization increasingly threatens democratic institutions. The belief that “my side” sees the world objectively while the “other side” sees it through the lens of its biases contributes to this political polarization and accompanying animus and distrust. This conviction, known as the “objectivity illusion,” was strong and persistent among Trump and Clinton supporters in the weeks before the 2016 presidential election. We show that the objectivity illusion predicts subsequent bias and polarization, including heightened partisanship over the presidential debates. A follow-up study showed that both groups impugned the objectivity of a putative blog author supporting the opposition candidate and saw supporters of that opposing candidate as evil.

Tuesday, September 8, 2020

Pharma drew a line in the sand over Covid-19 vaccine readiness, because someone had to

Ed Silverman
statnews.com
Originally posted September 7, 2020

Here is an excerpt:

The vaccine makers that are signing this pledge — Pfizer, Merck, AstraZeneca, Sanofi, GlaxoSmithKline, BioNTech, Johnson & Johnson, Moderna, and Novavax — are rushing to complete clinical trials. But only Pfizer has indicated it may have late-stage results in October, and that’s not a given.

Yet any move by the FDA to green light a Covid-19 vaccine without late-stage results will be interpreted as an effort to boost Trump — and rightly so.

Consider Trump’s erratic and selfish remarks. He recently accused the FDA of slowing the vaccine approval process and being part of a “deep state.” No wonder there is concern he may lean on Hahn to authorize emergency use prematurely. For his part, Hahn has insisted he won’t buckle to political pressure, but he also said emergency use may be authorized based on preliminary data.

“It’s unprecedented in my experience that industry would do something like this,” said Ira Loss of Washington Analysis, who tracks pharmaceutical regulatory and legislative matters for investors. “But we’ve experienced unprecedented events since the beginning of Covid-19, starting with the FDA, where the commissioner has proven to be malleable, to be kind, at the foot of the president.”

Remember, we’ve seen this movie before.

Amid criticism of his handling of the pandemic, Trump touted hydroxychloroquine, a decades-old malaria tablet, as a salve and the FDA authorized emergency use. Two weeks ago, he touted convalescent blood plasma as a medical breakthrough, but evidence of its effectiveness against the coronavirus is inconclusive. And Hahn initially overstated study results.

Most Americans seem to be catching on. A STAT-Harris poll released last week found that 78% of the public believes the vaccine approval process is driven by politics, not science. This goes for a majority of Democrats and Republicans.

The info is here.

Wednesday, October 24, 2018

Open Letter: Netflix's "Afflicted" Abandoning Ethics and Science

Maya Dusenbery
Pacific Standard
Originally published September 20, 2018

Here are two excerpts:

The problem is not that the series included these skeptical views. To be sure, one of the most difficult parts of being ill with these "contested" conditions—or, for that matter, even a well-accepted but "invisible" chronic disease—is contending with such doubts, which are pervasive among friends and family, the media, and the medical profession at large. But according to the participants, in many cases, interviews with their family and friends were deceptively edited to make them appear more skeptical than they actually are. In some cases, clips in which family members acknowledged they'd wondered if their loved one's problem was psychological early on in their illness were taken out of context to imply they still harbored those beliefs. In others, producers seem to have put words into their mouths: According to Jamison, interviewees were asked to start their answers by repeating the question they had been asked. This is how the producers managed to get a clip of his mom seemingly questioning if "hypochondria" was a component of her son's illness.

(cut)

Even more irresponsible is the inclusion of such psychological speculation by various unqualified doctors. Presented as experts despite the fact that they have not examined the participants and are not specialists in their particular conditions, they muse vaguely about the power of the mind to produce physical symptoms. A single psychiatrist, who has never evaluated any of the subjects, is quoted extensively throughout. In Episode 4, which is entitled "The Mind," he gets right to the point: "Statistically, it's more likely that the cause of the problem is a common psychiatric problem more than it is an unknown or uncatalogued physical illness. You can be deluded that you're sick, meaning you can believe you're sick when in fact you're not sick."

The info is here.

Wednesday, October 17, 2018

Machine Ethics and Artificial Moral Agents

Francesco Corea
Medium.com
Originally posted July 6, 2017

Here is an excerpt:

However, let’s look at the problem from a different angle. I was educated as an economist, so allow me to start my argument with this statement: let’s assume we have the perfect dataset. It is not only omni-comprehensive but also clean, consistent and deep both longitudinally and temporally speaking.

Even in this case, we have no guarantee AI won’t autonomously learn the same biases we did. In other words, removing biases by hand or by construction does not guarantee that those biases will not emerge again spontaneously.

This possibility also raises another (philosophical) question: we are building this argument from the assumption that biases are (mostly) bad. So let’s say the machines come up with a result we see as biased, and therefore we reset them and restart the analysis with new data. But the machines come up with a similarly “biased” result. Would we then be open to accepting that result as true, and to revising what we consider to be biased?

This is basically a cultural and philosophical clash between two different species.

In other words, I believe that two of the reasons why embedding ethics into machine design is so hard are that i) we don’t really agree unanimously on what ethics is, and ii) we should be open to admitting that our values or ethics might not be completely right, and that what we consider to be biased is not the exception but rather the norm.

Developing a (general) AI is making us think about these problems, and it will change (if it hasn’t already started to) our value system. And perhaps, who knows, we will end up learning something from machines’ ethics as well.

The info is here.

Monday, August 6, 2018

False Equivalence: Are Liberals and Conservatives in the U.S. Equally “Biased”?

Jonathan Baron and John T. Jost
Invited Revision, Perspectives on Psychological Science.

Abstract

On the basis of a meta-analysis of 51 studies, Ditto, Liu, Clark, Wojcik, Chen, et al. (2018) conclude that ideological “bias” is equivalent on the left and right of U.S. politics. In this commentary, we contend that this conclusion does not follow from the review and that Ditto and colleagues are too quick to embrace a false equivalence between the liberal left and the conservative right. For one thing, the issues, procedures, and materials used in studies reviewed by Ditto and colleagues were selected for purposes other than the inspection of ideological asymmetries. Consequently, methodological choices made by researchers were systematically biased to avoid producing differences between liberals and conservatives. We also consider the broader implications of a normative analysis of judgment and decision-making and demonstrate that the “bias” examined by Ditto and colleagues is not, in fact, an irrational bias, and that it is incoherent to discuss bias in the absence of standards for assessing accuracy and consistency. We find that Jost’s (2017) conclusions about domain-general asymmetries in motivated social cognition, which suggest that epistemic virtues are more prevalent among liberals than conservatives, are closer to the truth of the matter when it comes to current American politics. Finally, we question the notion that the research literature in psychology is necessarily characterized by “liberal bias,” as several authors have claimed.

Here is the end:

If academics are disproportionately liberal—in comparison with society at large—it just might be due to the fact that being liberal in the early 21st century is more compatible with the epistemic standards, values, and practices of academia than is being conservative.

The article is here.

See Your Surgeon Is Probably a Republican, Your Psychiatrist Probably a Democrat as another example.

Tuesday, June 5, 2018

Is There Such a Thing as Truth?

Errol Morris
Boston Review
Originally posted April 30, 2018

Here is an excerpt:

In fiction, we are often given an imaginary world with seemingly real objects—horses, a coach, a three-cornered hat and wig. But what about the objects of science—positrons, neutrinos, quarks, gravity waves, Higgs bosons? How do we reckon with their reality?

And truth. Is there such a thing? Can we speak of things as unambiguously true or false? In history, for example, are there things that actually happened? Louis XVI guillotined on January 21, 1793, at what has become known as the Place de la Concorde. True or false? Details may be disputed—a more recent example: how large, comparatively, was Donald Trump’s victory in the electoral college in 2016, or the crowd at his inauguration the following January? But do we really doubt that Louis’s bloody head was held up before the assembled crowd? Or doubt the existence of the curved path of a positron in a bubble chamber? Even though we might not know the answers to some questions—“Was Louis XVI decapitated?” or “Are there positrons?”—we accept that there are answers.

And yet, we read about endless varieties of truth. Coherence theories of truth. Pragmatic, relative truths. Truths for me, truths for you. Dog truths, cat truths. Whatever. I find these discussions extremely distasteful and unsatisfying. To say that a philosophical system is “coherent” tells me nothing about whether it is true. Truth is not hermetic. I cannot hide out in a system and assert its truth. For me, truth is about the relation between language and the world. A correspondence idea of truth. Coherence theories of truth are of little or no interest to me. Here is the reason: they are about coherence, not truth. We are talking about whether a sentence or a paragraph or group of paragraphs is true when set up against the world. Thackeray, introducing the fictional world of Vanity Fair, evokes the objects of a world he is familiar with—“a large family coach, with two fat horses in blazing harnesses, driven by a fat coachman in a three-cornered hat and wig, at the rate of four miles an hour.”

The information is here.

Wednesday, May 30, 2018

Reining It In: Making Ethical Decisions in a Forensic Practice

Donna M. Veraldi and Lorna Veraldi
A Paper Presented to American College of Forensic Psychology
34th Annual Symposium, San Diego, CA

Here is an excerpt:

Ethical dilemmas sometimes require making difficult choices among competing ethical principles and values. This presentation will discuss ethical dilemmas arising from the use of coercion and deception in forensic practice. In a forensic practice, the choice is not as simple as “do no harm” or “tell the truth.” What is and is not acceptable in terms of using various forms of pressure on individuals, or of assisting agencies that put pressure on individuals? How much information should forensic psychologists share with individuals about evaluation techniques? What does informed consent mean in the context of a forensic practice, where many of the individuals with whom we interact are not there by choice?

The information is here.

Wednesday, May 16, 2018

Escape the Echo Chamber

C Thi Nguyen
www.medium.com
Originally posted April 12, 2018

Something has gone wrong with the flow of information. It’s not just that different people are drawing subtly different conclusions from the same evidence. It seems like different intellectual communities no longer share basic foundational beliefs. Maybe nobody cares about the truth anymore, as some have started to worry. Maybe political allegiance has replaced basic reasoning skills. Maybe we’ve all become trapped in echo chambers of our own making — wrapping ourselves in an intellectually impenetrable layer of likeminded friends and web pages and social media feeds.

But there are two very different phenomena at play here, each of which subverts the flow of information in its own distinct way. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

Current usage has blurred this crucial distinction, so let me introduce a somewhat artificial taxonomy. An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission. That omission might be purposeful: we might be selectively avoiding contact with contrary views because, say, they make us uncomfortable. As social scientists tell us, we like to engage in selective exposure, seeking out information that confirms our own worldview. But that omission can also be entirely inadvertent. Even if we’re not actively trying to avoid disagreement, our Facebook friends tend to share our views and interests. When we take networks built for social reasons and start using them as our information feeds, we tend to miss out on contrary views and run into exaggerated degrees of agreement.

The information is here.

Monday, April 16, 2018

The Seth Rich lawsuit matters more than the Stormy Daniels case

Jill Abramson
The Guardian
Originally published March 20, 2018

Here is an excerpt:

I’ve previously written about Fox News’ shameless coverage of the 2016 unsolved murder of a young former Democratic National Committee staffer named Seth Rich. Last week, ABC News reported that his family has filed a lawsuit against Fox, charging that several of its journalists fabricated a vile story attempting to link the hacked emails from Democratic National Committee computers to Rich, who worked there.

After the fabricated story ran on the Fox website, it was retracted, but not before various on-air stars, especially Trump mouthpiece Sean Hannity, flogged the bogus conspiracy theory suggesting Rich had something to do with the hacked messages.

This shameful episode demonstrated, once again, that Rupert Murdoch’s favorite network, and Trump’s, has no ethical compass and no hesitation about the grief this manufactured story caused the 26-year-old murder victim’s family. It’s good to see the family striking back, since that is the only tactic the Murdochs and Trumps of the world will respect, and perhaps the only one that will force them to temper the calumny they spread on a daily basis.

Of course, the Rich lawsuit does not have the sex appeal of the Stormy case. The rightwing echo chamber will brazenly ignore its self-inflicted wounds. And, for the rest of the cable pundit brigades, the DNC emails and Rich are old news.

The article is here.

Monday, March 19, 2018

‘The New Paradigm,’ Conscience and the Death of Catholic Morality

E. Christian Brugger
National Catholic Register
Originally published February 23, 2018

Vatican Secretary of State Cardinal Pietro Parolin, in a recent interview with Vatican News, contends the controversial reasoning expressed in the apostolic exhortation Amoris Laetitia (The Joy of Love) represents a “paradigm shift” in the Church’s reasoning, a “new approach,” arising from a “new spirit,” which the Church needs to carry out “the process of applying the directives of Amoris Laetitia.”

His reference to a “new paradigm” is murky. But its meaning is not. Among other things, he is referring to a new account of conscience that exalts the subjectivity of the process of decision-making to a degree that relativizes the objectivity of the moral law. To understand this account, we might first look at a favored maxim of Pope Francis: “Reality is greater than ideas.”

It admits no single-dimensional interpretation, which is no doubt why it’s attractive to the “Pope of Paradoxes.” But in one area, the arena of doctrine and praxis, a clear meaning has emerged. Dogma and doctrine constitute ideas, while praxis (i.e., the concrete lived experience of people) is reality: “Ideas — conceptual elaborations — are at the service of … praxis” (Evangelii Gaudium, 232).

In relation to the controversy stirred by Amoris Laetitia, “ideas” is interpreted to mean Church doctrine on thorny moral issues such as, but not only, Communion for the divorced and civilly remarried, and “reality” is interpreted to mean the concrete circumstances and decision-making of ordinary Catholics.

The article is here.

Monday, March 12, 2018

The tech bias: why Silicon Valley needs social theory

Jan Bier
aeon.com
Originally posted February 14, 2018

Here is an excerpt:

That Google memo is an extreme example of an imbalance in how different ways of knowing are valued. Silicon Valley tech companies draw on innovative technical theory but have yet to really incorporate advances in social theory. The inattention to such knowledge becomes all too apparent when algorithms fail in their real-life applications – from automated soap-dispensers that fail to turn on when a user has dark brown skin, to the new iPhone X’s inability to distinguish among different Asian women.

Social theorists in fields such as sociology, geography, and science and technology studies have shown how race, gender and class biases inform technical design. So there’s irony in the fact that employees hold sexist and racist attitudes, yet ‘we are supposed to believe that these same employees are developing “neutral” or “objective” decision-making tools’, as the communications scholar Safiya Umoja Noble at the University of Southern California argues in her book Algorithms of Oppression (2018).

In many cases, what’s eroding the value of social knowledge is unintentional bias – on display when prominent advocates for equality in science and tech undervalue research in the social sciences. The physicist Neil deGrasse Tyson, for example, has downplayed the link between sexism and under-representation in science. Apparently, he’s happy to ignore extensive research pointing out that the natural sciences’ male-dominated institutional cultures are a major cause of the attrition of female scientists at all stages of their careers.

The article is here.

Saturday, November 25, 2017

Rather than being free of values, good science is transparent about them

Kevin Elliott
The Conversation
Originally published November 8, 2017

Scientists these days face a conundrum. As Americans are buffeted by accounts of fake news, alternative facts and deceptive social media campaigns, how can researchers and their scientific expertise contribute meaningfully to the conversation?

There is a common perception that science is a matter of hard facts and that it can and should remain insulated from the social and political interests that permeate the rest of society. Nevertheless, many historians, philosophers and sociologists who study the practice of science have come to the conclusion that trying to kick values out of science risks throwing the baby out with the bathwater.

Ethical and social values – like the desire to promote economic development, public health or environmental protection – often play integral roles in scientific research. By acknowledging this, scientists might seem to give away their authority as a defense against the flood of misleading, inaccurate information that surrounds us. But I argue in my book “A Tapestry of Values: An Introduction to Values in Science” that if scientists take appropriate steps to manage and communicate about their values, they can promote a more realistic view of science as both value-laden and reliable.

The article is here.

Tuesday, January 24, 2017

Explanatory Judgment, Moral Offense and Value-Free Science

Matteo Colombo
Imperfect Cognitions
Originally posted September 27, 2016

Here is the conclusion:

Our findings indicate that people’s judgements about scientific results are often imbued with moral value. While this conclusion suggests that, as a matter of psychological fact, the ideal of a value-free science may not be achievable, it raises important questions about the attainment of scientific knowledge in democratic societies. How can scientific evidence be more effectively conveyed to the public? What is it that drives public controversy over such issues as climate change, vaccinations and genetically modified organisms? Does the prevalent political and moral homogeneity in many present-day scientific communities hinder or systematically bias their pursuit of knowledge?

The blog post is here.

Editor's Note: Value-free or objective psychotherapy is a myth. We always bring our morals and values into the psychotherapy relationship.

Tuesday, November 22, 2016

When Disagreement Gets Ugly: Perceptions of Bias and the Escalation of Conflict

Kathleen A. Kennedy and Emily Pronin
Pers Soc Psychol Bull 2008 34: 833

Abstract

It is almost a truism that disagreement produces conflict. This article suggests that perceptions of bias can drive this relationship. First, these studies show that people perceive those who disagree with them as biased. Second, they show that the conflict-escalating approaches that people take toward those who disagree with them are mediated by people's tendency to perceive those who disagree with them as biased. Third, these studies manipulate the mediator and show that experimental manipulations that prompt people to perceive adversaries as biased lead them to respond more conflictually—and that such responding causes those who engage in it to be viewed as more biased and less worthy of cooperative gestures. In summary, this article provides evidence for a “bias-perception conflict spiral,” whereby people who disagree perceive each other as biased, and those perceptions in turn lead them to take conflict-escalating actions against each other (which in turn engender further perceptions of bias, continuing the spiral).

The article is here.

For those who do marital counseling or work in any adversarial system.

Tuesday, November 15, 2016

The Inevitable Evolution of Bad Science

Ed Yong
The Atlantic
Originally published September 21, 2016

Here is an excerpt:

In the model, as in real academia, positive results are easier to publish than negative ones, and labs that publish more get more prestige, funding, and students. They also pass their practices on. With every generation, one of the oldest labs dies off, while one of the most productive ones reproduces, creating an offspring that mimics the research style of the parent. That’s the equivalent of a student from a successful team starting a lab of their own.

Over time, and across many simulations, the virtual labs inexorably slid towards less effort, poorer methods, and almost entirely unreliable results. And here’s the important thing: Unlike the hypothetical researcher I conjured up earlier, none of these simulated scientists are actively trying to cheat. They used no strategy, and they behaved with integrity. And yet, the community naturally slid towards poorer methods. What the model shows is that a world that rewards scientists for publications above all else—a world not unlike this one—naturally selects for weak science.
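The generational dynamic described above can be sketched in a few lines of code. The following is a loose, hypothetical toy rendering of the kind of model the article discusses, not the researchers' actual code; every parameter value (effort range, mutation size, publication rates) is invented purely for illustration:

```python
import random

def simulate(n_labs=20, n_generations=200, seed=42):
    """Toy model of selection for 'productive' science.

    Labs with lower effort run more studies per cycle, so they rack up
    more positive (publishable) results. Each generation the oldest lab
    dies and the most productive lab spawns a slightly mutated copy of
    itself. Returns (initial, final) mean effort across labs.
    """
    rng = random.Random(seed)
    # Each lab is [effort, publications, age]; effort lies in (0, 1].
    labs = [[rng.uniform(0.5, 1.0), 0, i] for i in range(n_labs)]
    initial_effort = sum(lab[0] for lab in labs) / n_labs

    for _ in range(n_generations):
        for lab in labs:
            # Lower effort -> more studies attempted per cycle.
            attempts = int(10 * (1.1 - lab[0]))
            # Only "positive" results count as publications.
            lab[1] += sum(rng.random() < 0.5 for _ in range(attempts))
            lab[2] += 1
        labs.remove(max(labs, key=lambda lab: lab[2]))  # oldest lab dies
        parent = max(labs, key=lambda lab: lab[1])      # most productive reproduces
        child_effort = min(1.0, max(0.05, parent[0] + rng.gauss(0, 0.02)))
        labs.append([child_effort, 0, 0])               # child mimics parent

    return initial_effort, sum(lab[0] for lab in labs) / n_labs
```

Note that no lab in this sketch "cheats": selection on publication count alone is what drags mean effort downward over the generations, which is the article's point that the incentive structure, not individual dishonesty, does the damage.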

“The model may even be optimistic,” says Brian Nosek from the Center for Open Science, because it doesn’t account for our unfortunate tendency to justify and defend the status quo. He notes, for example, that studies in the social and biological sciences are, on average, woefully underpowered—they are too small to find reliable results.

The article is here.

Saturday, October 31, 2015

Why the Free Will Debate Never Ends

By Julian Baggini
The Philosophers Magazine
Originally published October 13, 2015

Here is an excerpt:

Smilansky is speculating about optimism and pessimism. But one study has come up with some empirical evidence that extraversion and introversion are correlated with beliefs about free will, concluding that “extraversion predicts, to a significant extent, those who have compatibilist versus incompatibilist intuitions.”

Many are appalled by this idea as it goes against the whole notion that philosophy is about arguments, not arguers. But you only need to read the biographies and autobiographies of great philosophers to see that their personalities are intimately tied up with their ideas. W V O Quine, for instance, recalled how as a toddler he sought the unfamiliar way home, which he interpreted as reflecting “the thrill of discovery in theoretical science: the reduction of the unfamiliar to the familiar.” Later, he was obsessed with crossing state lines and national borders, ticking each off on a list as he did so. Paul Feyerabend recalled how, not yet ten, he was enchanted by magic and mystery and wasn’t affected by “the many strange events that seemed to make up our world.” Only a philosopher with delusions of her subject's objectivity would be surprised to find out that Quine and Feyerabend went on to write very different kinds of philosophy: Quine’s in a formal, logical, systematising tradition (though typically on the limits of such formalisations); Feyerabend’s anti-reductive and anti-systematising. It would take a great deal of faith in the objectivity of philosophy and philosophers to think that Feyerabend and Quine arrived at their respective philosophical positions simply by following the arguments where they led, when their inclinations so obviously seem to be in tune with their settled conclusions.

The entire article is here.

Tuesday, September 15, 2015

Explanatory Judgment, Moral Offense and Value-Free Science

By Matteo Colombo, Leandra Bucher, & Yoel Inbar
Review of Philosophy and Psychology
August 2015

Abstract

A popular view in philosophy of science contends that scientific reasoning is objective to the extent that the appraisal of scientific hypotheses is not influenced by moral, political, economic, or social values, but only by the available evidence. A large body of results in the psychology of motivated-reasoning has put pressure on the empirical adequacy of this view. The present study extends this body of results by providing direct evidence that the moral offensiveness of a scientific hypothesis biases explanatory judgment along several dimensions, even when prior credence in the hypothesis is controlled for. Furthermore, it is shown that this bias is insensitive to an economic incentive to be accurate in the evaluation of the evidence. These results contribute to call into question the attainability of the ideal of a value-free science.

The entire article is here.