Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Opinions.

Sunday, June 12, 2022

You Were Right About COVID, and Then You Weren’t

Olga Khazan
The Atlantic
Originally posted May 3, 2022

Here are two excerpts:

Tenelle Porter, a psychologist at UC Davis, studies so-called intellectual humility, or the recognition that we have imperfect information and thus our beliefs might be wrong. Practicing intellectual humility, she says, is harder when you’re very active on the internet, or when you’re operating in a cutthroat culture. That might be why it pains me—a very online person working in the very competitive culture of journalism—to say that I was incredibly wrong about COVID at first. In late February 2020, when Smith was sounding the alarm among his co-workers, I had drinks with a colleague who asked me if I was worried about “this new coronavirus thing.”

“No!” I said. After all, I had covered swine flu, which blew over quickly and wasn’t very deadly.

A few days later, my mom called and asked me the same question. “People in Italy are staying inside their houses,” she pointed out.

“Yeah,” I said. “But SARS and MERS both stayed pretty localized to the regions they originally struck.”

Then, a few weeks later, when we were already working from home and buying dried beans, a friend asked me if she should be worried about her wedding, which was scheduled for October 2020.

“Are you kidding?” I said. “They will have figured out a vaccine or something by then.” Her wedding finally took place this month.

(cut)

Thinking like a scientist, or a scout, means “recognizing that every single one of your opinions is a hypothesis waiting to be tested. And every decision you make is an experiment where you forgot to have a control group,” Grant said. The best way to hold opinions or make predictions is to determine what you think given the state of the evidence—and then decide what it would take for you to change your mind. Not only are you committing to staying open-minded; you’re committing to the possibility that you might be wrong.

Because the coronavirus has proved volatile and unpredictable, we should evaluate it as a scientist would. We can’t hold so tightly to prior beliefs that we allow them to guide our behavior when the facts on the ground change. This might mean that we lose our masks one month and don them again the next, or reschedule an indoor party until after case numbers decrease. It might mean supporting strict lockdowns in the spring of 2020 but not in the spring of 2022. It might even mean closing schools again, if a new variant seems to attack children. We should think of masks and other COVID precautions not as shibboleths but like rain boots and umbrellas, as Ashish Jha, the White House coronavirus-response coordinator, has put it. There’s no sense in being pro- or anti-umbrella. You just take it out when it’s raining.

Saturday, August 3, 2019

When Do Robots Have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will

Eddy Nahmias, Corey Allen, & Bradley Loveall
Georgia State University

From the Conclusion:

If future research bolsters our initial findings, then it would appear that when people consider whether agents are free and responsible, they are considering whether the agents have capacities to feel emotions more than whether they have conscious sensations or even capacities to deliberate or reason. It’s difficult to know whether people assume that phenomenal consciousness is required for or enhances capacities to deliberate and reason. And of course, we do not deny that cognitive capacities for self-reflection, imagination, and reasoning are crucial for free and responsible agency (see, e.g., Nahmias 2018). For instance, when considering agents that are assumed to have phenomenal consciousness, such as humans, it is likely that people’s attributions of free will and responsibility decrease in response to information that an agent has severely diminished reasoning capacities. But people seem to have intuitions that support the idea that an essential condition for free will is the capacity to experience conscious emotions. And we find it plausible that these intuitions indicate that people take it to be essential to being a free agent that one can feel the emotions involved in reactive attitudes and in genuinely caring about one’s choices and their outcomes.

(cut)

Perhaps fiction points us towards the truth here. In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., and Westworld), viewers tend to think of the robots differently when they are portrayed in a way that suggests they express and feel emotions. No matter how intelligent or complex their behavior, the robots do not come across as free and autonomous until they seem to care about what happens to them (and perhaps others). Often this is portrayed by their showing fear of their own or others’ deaths, or expressing love, anger, or joy. Sometimes it is portrayed by the robots’ expressing reactive attitudes, such as indignation about how humans treat them, or our feeling such attitudes towards them, for instance when they harm humans.

The research paper is here.

Saturday, January 26, 2019

People use less information than they think to make up their minds

Nadav Klein and Ed O’Brien
PNAS, December 26, 2018, 115(52), 13222-13227

Abstract

A world where information is abundant promises unprecedented opportunities for information exchange. Seven studies suggest these opportunities work better in theory than in practice: People fail to anticipate how quickly minds change, believing that they and others will evaluate more evidence before making up their minds than they and others actually do. From evaluating peers, marriage prospects, and political candidates to evaluating novel foods, goods, and services, people consume far less information than expected before deeming things good or bad. Accordingly, people acquire and share too much information in impression-formation contexts: People overvalue long-term trials, overpay for decision aids, and overwork to impress others, neglecting the speed at which conclusions will form. In today’s information age, people may intuitively believe that exchanging ever-more information will foster better-informed opinions and perspectives—but much of this information may be lost on minds long made up.

Significance

People readily categorize things as good or bad, a welcome adaptation that enables action and reduces information overload. The present research reveals an unforeseen consequence: People do not fully appreciate this immediacy of judgment, instead assuming that they and others will consider more information before forming conclusions than they and others actually do. This discrepancy in perceived versus actual information use reveals a general psychological bias that bears particular relevance in today’s information age. Presumably, one hopes that easy access to abundant information fosters uniformly more-informed opinions and perspectives. The present research suggests mere access is not enough: Even after paying costs to acquire and share ever-more information, people then stop short and do not incorporate it into their judgments.

Wednesday, December 12, 2018

Social relationships more important than hard evidence in partisan politics

phys.org
Dartmouth College
Originally posted November 13, 2018

Here is an excerpt:

Three factors drive the formation of social and political groups, according to the research: social pressure to have stronger opinions, the relationship of an individual's opinions to those of their social neighbors, and the benefits of having social connections.

A key idea studied in the paper is that people choose their opinions and their connections to avoid differences of opinion with their social neighbors. By joining like-minded groups, individuals also prevent the psychological stress, or "cognitive dissonance," of considering opinions that do not match their own.

"Human social tendencies are what form the foundation of that political behavior," said Tucker Evans, a senior at Dartmouth who led the study. "Ultimately, strong relationships can have more value than hard evidence, even for things that some would take as proven fact."

The information is here.

The original research is here.

Thursday, November 29, 2018

Does AI Ethics Need to be More Inclusive?

Patrick Lin
Forbes.com
Originally posted October 29, 2018

Here is an excerpt:

Ethics is more than a survey of opinions

First, as the study’s authors allude to in their Nature paper and elsewhere, public attitudes don’t dictate what’s ethical or not.  People believe all kinds of crazy things—such as that slavery should be permitted—but that doesn’t mean those ethical beliefs are true or have any weight.  So, capturing responses of more people doesn’t necessarily help figure out what’s ethical or not.  Sometimes, more is just more, not better or even helpful.

This is the difference between descriptive ethics and normative ethics.  The former is more like sociology that simply seeks to describe what people believe, while the latter is more like philosophy that seeks reasons for why a belief may be justified (or not) and how things ought to be.

Dr. Edmond Awad, lead author of the Nature paper, cautioned, “What we are trying to show here is descriptive ethics: people’s preferences in ethical decisions. But when it comes to normative ethics, which is how things should be done, that should be left to experts.”

Nonetheless, public attitudes are a necessary ingredient in practical policymaking, which should aim at the ethical but doesn’t always hit that mark.  If expert judgments in ethics diverge too much from public attitudes—asking more from a population than what they’re willing to agree to—that’s a problem for implementing the policy, and a resolution is needed.

The info is here.

Wednesday, October 31, 2018

Learning Others’ Political Views Reduces the Ability to Assess and Use Their Expertise in Nonpolitical Domains

Joseph Marks, Eloise Copland, Eleanor Loh, Cass R. Sunstein, and Tali Sharot
Harvard Public Law Working Paper No. 18-22 (April 13, 2018)

Abstract

On political questions, many people are especially likely to consult and learn from those whose political views are similar to their own, thus creating a risk of echo chambers or information cocoons. Here, we test whether the tendency to prefer knowledge from the politically like-minded generalizes to domains that have nothing to do with politics, even when evidence indicates that person is less skilled in that domain than someone with dissimilar political views. Participants had multiple opportunities to learn about others’ (1) political opinions and (2) ability to categorize geometric shapes. They then decided to whom to turn for advice when solving an incentivized shape categorization task. We find that participants falsely concluded that politically like-minded others were better at categorizing shapes and thus chose to hear from them. Participants were also more influenced by politically like-minded others, even when they had good reason not to be. The results demonstrate that knowing about others’ political views interferes with the ability to learn about their competency in unrelated tasks, leading to suboptimal information-seeking decisions and errors in judgement. Our findings have implications for political polarization and social learning in the midst of political divisions.

You can download the paper here.

Probably a good resource to contemplate before discussing politics in psychotherapy.

Wednesday, August 8, 2018

The Road to Pseudoscientific Thinking

Julia Shaw
Scientific American
Originally published January 16, 2017

Here is the conclusion:

So, where to from here? Are there any cool, futuristic applications of such insights? According to McColeman, “I expect that category learning work from human learning will help computer vision moving forward, as we understand the regularities in the environment that people are picking up on. There’s still a lot of room for improvement in getting computer systems to notice the same things that people notice.” We need to help people, and computers, to avoid being distracted by unimportant, attention-grabbing information.

The take-home message from this line of research seems to be: When fighting the post-truth war against pseudoscience and misinformation, make sure that important information is eye-catching and quickly understandable.

The information is here.

Tuesday, October 3, 2017

Facts Don’t Change People’s Minds. Here’s What Does

Ozan Varol
Heleo
Originally posted September 6, 2017

Here is an excerpt:

The mind doesn’t follow the facts. Facts, as John Adams put it, are stubborn things, but our minds are even more stubborn. Doubt isn’t always resolved in the face of facts for even the most enlightened among us, however credible and convincing those facts might be.

As a result of the well-documented confirmation bias, we tend to undervalue evidence that contradicts our beliefs and overvalue evidence that confirms them. We filter out inconvenient truths and arguments on the opposing side. As a result, our opinions solidify, and it becomes increasingly harder to disrupt established patterns of thinking.

We believe in alternative facts if they support our pre-existing beliefs. Aggressively mediocre corporate executives remain in office because we interpret the evidence to confirm the accuracy of our initial hiring decision. Doctors continue to preach the ills of dietary fat despite emerging research to the contrary.

If you have any doubts about the power of the confirmation bias, think back to the last time you Googled a question. Did you meticulously read each link to get a broad objective picture? Or did you simply skim through the links looking for the page that confirms what you already believed was true? And let’s face it, you’ll always find that page, especially if you’re willing to click through to Page 12 on the Google search results.

The article is here.

Saturday, July 1, 2017

Hypocritical Flip-Flop, or Courageous Evolution? When Leaders Change Their Moral Minds.

Kreps, Tamar A.; Laurin, Kristin; Merritt, Anna C.
Journal of Personality and Social Psychology, June 8, 2017

Abstract

How do audiences react to leaders who change their opinion after taking moral stances? We propose that people believe moral stances are stronger commitments, compared with pragmatic stances; we therefore explore whether and when audiences believe those commitments can be broken. We find that audiences believe moral commitments should not be broken, and thus that they deride as hypocritical leaders who claim a moral commitment and later change their views. Moreover, they view them as less effective and less worthy of support. Although participants found a moral mind changer especially hypocritical when they disagreed with the new view, the effect persisted even among participants who fully endorsed the new view. We draw these conclusions from analyses and meta-analyses of 15 studies (total N = 5,552), using recent statistical advances to verify the robustness of our findings. In several of our studies, we also test for various possible moderators of these effects; overall we find only 1 promising finding: some evidence that 2 specific justifications for moral mind changes—citing a personally transformative experience, or blaming external circumstances rather than acknowledging opinion change—help moral leaders appear more courageous, but no less hypocritical. Together, our findings demonstrate a lay belief that moral views should be stable over time; they also suggest a downside for leaders in using moral framings.

The article is here.

Wednesday, April 12, 2017

Why People Continue to Believe Objectively False Things

Amanda Taub and Brendan Nyhan
New York Times - The Upshot
Originally posted March 22, 2017

Here is an excerpt:

Even when myths are dispelled, their effects linger. The Boston College political scientist Emily Thorson conducted a series of studies showing that exposure to a news article containing a damaging allegation about a fictional political candidate caused people to rate the candidate more negatively even when the allegation was corrected and people believed it to be false.

There are ways to correct information more effectively. Adam Berinsky of M.I.T., for instance, found that a surprising co-partisan source (a Republican member of Congress) was the most effective in reducing belief in the “death panel” myth about the Affordable Care Act.

But in the wiretapping case, Republican lawmakers have neither supported Mr. Trump’s wiretap claims (which could risk their credibility) nor strenuously opposed them (which could prompt a partisan backlash). Instead, they have tried to shift attention to a different political narrative — one that suits the partisan divide by making Mr. Obama the villain of the piece. Rather than focusing on the wiretap allegation, they have sought to portray the House Intelligence Committee hearings on Russian interference in the election as an investigation into leaks of classified information.

The article is here.

Friday, January 27, 2017

Moral Grandstanding

Justin Tosi and Brandon Warmke
Philosophy and Public Affairs
First published June 2016

Kurt Baier wrote that “moral talk is often rather repugnant. Leveling moral accusations, expressing moral indignation, passing moral judgment, allotting the blame, administering moral reproof, justifying oneself, and, above all, moralizing—who can enjoy such talk?” (1965: 3). When public moral discourse is at its best, we think that these features (if they are present at all) are unobjectionable. But we also think that, to some degree, Baier is right: public moral discourse—that is, talk intended to bring some matter of moral significance to the public consciousness—sometimes fails to live up to its ideal. Public moral discourse can go wrong in many ways. One such way is a phenomenon we believe to be pervasive: moral grandstanding (hereafter: “grandstanding”). We begin by developing an account of grandstanding. We then show that our account, with support from some standard theses of social psychology, explains the characteristic ways that grandstanding is manifested in public moral discourse. We conclude by arguing that there are good reasons to think that moral grandstanding is typically morally bad and should be avoided.

The article is here.

Monday, January 4, 2016

Why are we humans so prone to believing spooky nonsense?

Stephen Law
Aeon - Opinions
Originally published December 15, 2015

Scientists working in the cognitive science of religion have offered other explanations, including the hyperactive agency-detecting device (HADD). This tendency explains why a rustle in the bushes in the dark prompts the instinctive thought: ‘There’s someone there!’ We seem to have evolved to be extremely quick to ascribe agency – the capacity for intention and action – even to inanimate objects. In our ancestral environment, this tendency is not particularly costly in terms of survival and reproduction, but a failure to detect agents that are there can be very costly. Fail to detect a sabre-toothed cat, and it’ll likely take you out of the gene pool. The evolution of a HADD can account for the human tendency to believe in the presence of agents even when none can actually be observed. Hence the human belief in invisible person-like beings, such as spirits or gods. There are also forms of supernatural belief that don’t fit the ‘invisible person-like being’ mould, but merely posit occult forces – eg, feng shui, supernaturally understood – but the HADD doesn’t account for such beliefs.
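The evolutionary logic behind the HADD is an asymmetric-cost calculation, and a toy example makes it concrete. The numbers below are invented purely for illustration (none come from the article); they show why a hair-trigger detector can beat a more accurate but conservative one when a miss is far costlier than a false alarm.

# Toy expected-cost comparison for agency detection (all numbers invented).
# A false alarm (fleeing a rustle of wind) is cheap; a miss (ignoring a
# sabre-toothed cat) is catastrophic, so over-detection pays.

P_AGENT = 0.02          # assumed probability a rustle really is an agent
COST_FALSE_ALARM = 1    # assumed cost of fleeing from nothing
COST_MISS = 1000        # assumed cost of ignoring a real predator

def expected_cost(p_hit, p_false_alarm):
    """Expected cost per rustle for a detector with these hit/false-alarm rates."""
    miss = P_AGENT * (1 - p_hit) * COST_MISS
    alarm = (1 - P_AGENT) * p_false_alarm * COST_FALSE_ALARM
    return miss + alarm

jumpy = expected_cost(p_hit=0.99, p_false_alarm=0.50)      # hyperactive detector
skeptical = expected_cost(p_hit=0.60, p_false_alarm=0.05)  # conservative detector

print(f"hyperactive detector: {jumpy:.2f} expected cost per rustle")
print(f"skeptical detector:   {skeptical:.2f} expected cost per rustle")

Under these made-up costs the hyperactive detector loses about 0.69 per rustle against roughly 8.05 for the skeptical one: natural selection tolerates many cheap false alarms to avoid one fatal miss.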

The article is here.

Wednesday, September 30, 2015

New Threats to Academic Freedom

Francesca Minerva
Bioethics, (2014): 28(4); 157–162

Here is an excerpt:

In the first few days following online publication, we were deluged with an average of 30 death threats and hate emails a day. Many blogs and online newspapers reported the news and thousands of Twitter, Facebook and Google+ users shared the links and commented on the articles.

The discussion, largely in public rather than in academic journals, did not focus exclusively on the arguments of the paper but also on the authors. Perhaps attesting to an underlying current of sexism, the personal attacks were largely directed at me: I am a young woman and young women are supposed to have babies, not to argue in favour of after-birth abortion. This disparity got to the point that some newspapers even neglected to mention that the paper was co-authored, indicating me as the only author.

The different treatment of Singer’s and Tooley’s work on the same topic on the one hand, and our paper on the other, shows how the Web has changed the way academic ideas circulate. It is useful to highlight at least three aspects of this change:

1) The Internet has significantly speeded up the dissemination of academic ideas to the general public. Up to twenty years ago, access to academic work was almost exclusively through academic books and hard-copy academic journals. Nowadays, many academic journals maintain an online version which is easily and quickly accessible. Journalists can read academic papers and write a piece for an online newspaper, which may be shared by millions of users on other websites, blogs, and social network sites.

The entire paper is here.

Monday, March 23, 2015

Why Our Children Don’t Think There Are Moral Facts

By Justin P. McBrayer
The New York Times - Opinionator
Originally posted March 2, 2015

Here is an excerpt:

In summary, our public schools teach students that all claims are either facts or opinions and that all value and moral claims fall into the latter camp. The punchline: there are no moral facts. And if there are no moral facts, then there are no moral truths. 

The inconsistency in this curriculum is obvious. For example, at the outset of the school year, my son brought home a list of student rights and responsibilities. Had he already read the lesson on fact vs. opinion, he might have noted that the supposed rights of other students were based on no more than opinions. According to the school’s curriculum, it certainly wasn’t true that his classmates deserved to be treated a particular way — that would make it a fact. Similarly, it wasn’t really true that he had any responsibilities — that would be to make a value claim a truth. It should not be a surprise that there is rampant cheating on college campuses: If we’ve taught our students for 12 years that there is no fact of the matter as to whether cheating is wrong, we can’t very well blame them for doing so later on. 

The entire article is here.

Thursday, August 15, 2013

Sherman and Rowes: Psychological Warfare (Licensed) in Kentucky

By Paul Sherman and Jeff Rowes
The Wall Street Journal
Originally published July 16, 2013

Was Dear Abby a career criminal? Can "The Dr. Oz Show" be censored? Absolutely—at least according to the Kentucky attorney general and the state's Board of Examiners of Psychology, which just banned one of the most popular advice columns in the United States from all of Kentucky's newspapers.

This act of censorship has forced a showdown in federal court over one of the most important unanswered questions in First Amendment law: Can occupational-licensing laws—which require the government's permission to work—trump free speech? Some government licensing boards, which function increasingly as censors, certainly think the answer is yes.

The entire story is here.

Thanks to Don McAleer for this story.

Friday, January 20, 2012

What Opinions Can Psychologists Give About Persons They Have Never Met?