Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, August 3, 2022

Predictors and consequences of intellectual humility

Porter, T., Elnakouri, A., Meyers, E.A. et al.
Nat Rev Psychol (2022). 
https://doi.org/10.1038/s44159-022-00081-9

Abstract

In a time of societal acrimony, psychological scientists have turned to a possible antidote — intellectual humility. Interest in intellectual humility comes from diverse research areas, including researchers studying leadership and organizational behaviour, personality science, positive psychology, judgement and decision-making, education, culture, and intergroup and interpersonal relationships. In this Review, we synthesize empirical approaches to the study of intellectual humility. We critically examine diverse approaches to defining and measuring intellectual humility and identify the common element: a meta-cognitive ability to recognize the limitations of one’s beliefs and knowledge. After reviewing the validity of different measurement approaches, we highlight factors that influence intellectual humility, from relationship security to social coordination. Furthermore, we review empirical evidence concerning the benefits and drawbacks of intellectual humility for personal decision-making, interpersonal relationships, scientific enterprise and society writ large. We conclude by outlining initial attempts to boost intellectual humility, foreshadowing possible scalable interventions that can turn intellectual humility into a core interpersonal, institutional and cultural value.

Importance of intellectual humility

The willingness to recognize the limits of one’s knowledge and fallibility can confer societal and individual benefits, if expressed in the right moment and to the proper extent. This insight echoes the philosophical roots of intellectual humility as a virtue. State and trait intellectual humility have been associated with a range of cognitive, social and personality variables (Table 2). At the societal level, intellectual humility can promote societal cohesion by reducing group polarization and encouraging harmonious intergroup relationships. At the individual level, intellectual humility can have important consequences for wellbeing, decision-making and academic learning.

Notably, empirical research has provided little evidence regarding the generalizability of the benefits or drawbacks of intellectual humility beyond the unique contexts of WEIRD (Western, educated, industrialized, rich and democratic) societies. With this caveat, below is an initial set of findings concerning the implications of possessing high levels of intellectual humility. Unless otherwise specified, the evidence below concerns trait-level intellectual humility. After reviewing these benefits, we consider attempts to improve an individual’s intellectual humility and confer associated benefits.

Social implications

People who score higher in intellectual humility are more likely to display tolerance of opposing political and religious views, to exhibit less hostility toward members of those opposing groups, and to resist derogating outgroup members as intellectually and morally bankrupt. Although intellectually humbler people are capable of intergroup prejudice, they are more willing to question themselves and to consider rival viewpoints. Indeed, people with greater intellectual humility display less myside bias, expose themselves to opposing perspectives more often and show greater openness to befriending outgroup members on social media platforms. By comparison, people with lower intellectual humility display features of cognitive rigidity and are more likely to hold inflexible opinions and beliefs.

Thursday, June 6, 2019

A socio-historical take on the meta-problem of consciousness

Hakwan Lau and Matthias Michel
PsyArXiv Preprints
Last Edited May 21, 2019

Abstract

Whether consciousness is hard to explain depends on the notion of explanation at play. Importantly, for an explanation to be successful, it is necessary to have a correct understanding of the relevant basic empirical facts (i.e. the explanans). We review socio-historical factors that account for why, as a field, the neuroscience of consciousness has not been particularly successful at getting the basic facts right. And yet, we tend to aim for explanations of an unrealistically and unnecessarily ambitious nature. This discrepancy between ambitious notions of explanations and the relatively poor quality of explanans may account for what Chalmers calls “the meta-problem”.

The paper is here.

Thursday, February 28, 2019

Should Watson Be Consulted for a Second Opinion?

David Luxton
AMA J Ethics. 2019;21(2):E131-137.
doi: 10.1001/amajethics.2019.131.

Abstract

This article discusses ethical responsibility and legal liability issues regarding the use of IBM Watson™ for clinical decision making. In a case, a patient presents with symptoms of leukemia. Benefits and limitations of using Watson or other intelligent clinical decision-making tools are considered, along with precautions that should be taken before consulting artificially intelligent systems. Guidance for health care professionals and organizations using artificially intelligent tools to diagnose and to develop treatment recommendations is also offered.

Here is an excerpt:

Understanding Watson’s Limitations

There are precautions that should be taken into consideration before consulting Watson. First, it's important for physicians such as Dr O to understand the technical challenges of accessing quality data that the system needs to analyze in order to derive recommendations. Idiosyncrasies in patient health care record systems are one culprit, causing missing or incomplete data. If some of the data available to Watson is inaccurate, it could result in diagnosis and treatment recommendations that are flawed or at least inconsistent. An advantage of using a system such as Watson, however, is that it might be able to identify inconsistencies (such as those caused by human input error) that a human might otherwise overlook. Indeed, a primary benefit of systems such as Watson is that they can discover patterns that not even human experts might be aware of, and they can do so in an automated way. This automation has the potential to reduce uncertainty and improve patient outcomes.
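To make the point concrete, here is a minimal, hypothetical sketch (not Watson's actual interface; the record structure and plausible ranges are invented for illustration) of the kind of automated consistency check such a system can run over every incoming record, catching input errors a busy clinician might overlook:

```python
# Hypothetical illustration only (not IBM Watson's actual API).
# Flags lab values outside plausible physiological ranges, the kind
# of human input error an automated system can catch at scale.
from dataclasses import dataclass

@dataclass
class LabResult:
    name: str
    value: float
    low: float   # lower bound of the plausible range
    high: float  # upper bound of the plausible range

def flag_implausible(results: list[LabResult]) -> list[str]:
    """Return a warning for each value outside its plausible range."""
    return [
        f"{r.name}={r.value} outside plausible range [{r.low}, {r.high}]"
        for r in results
        if not (r.low <= r.value <= r.high)
    ]

# Example: a white-blood-cell count mis-keyed by a factor of ten.
print(flag_implausible([LabResult("WBC (10^9/L)", 450.0, 4.0, 11.0)]))
```

A real clinical system would draw its ranges from curated references and patient context, but the principle is the same: mechanical, exhaustive checking of every field.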

Thursday, September 20, 2018

Man-made human 'minibrains' spark debate on ethics and morality

Carolyn Y. Johnson
www.iol.za
Originally posted September 3, 2018

Here is an excerpt:

Five years ago, an ethical debate about organoids seemed to many scientists to be premature. The organoids were exciting because they were similar to the developing brain, and yet they were incredibly rudimentary. They were constrained in how big they could get before cells in the core started dying, because they weren't suffused with blood vessels or supplied with nutrients and oxygen by a beating heart. They lacked key cell types.

Still, there was something different about brain organoids compared with routine biomedical research. Song recalled that one of the amazing but also unsettling things about the early organoids was that they weren't as targeted to develop into specific regions of the brain, so it was possible to accidentally get retinal cells.

"It's difficult to see the eye in a dish," Song said.

Now, researchers are succeeding at keeping organoids alive for longer periods of time. At a talk, Hyun recalled one researcher joking that the lab had sung "Happy Birthday" to an organoid when it was a year old. Some researchers are implanting organoids into rodent brains, where they can stay alive longer and grow more mature. Others are building multiple organoids representing different parts of the brain, such as the hippocampus, which is involved in memory, or the cerebral cortex - the seat of cognition - and fusing them together into larger "assembloids."

Even as scientists express scepticism that brain organoids will ever come close to sentience, they're the ones calling for a broad discussion, and perhaps more oversight. The questions range from the practical to the fantastical. Should researchers make sure that people who donate their cells for organoid research are informed that they could be used to make a tiny replica of parts of their brain? If organoids became sophisticated enough, should they be granted greater protections, like the rules that govern animal research? Without a consensus on what consciousness or pain would even look like in the brain, how will scientists know when they're nearing the limit?

The info is here.

Monday, July 9, 2018

Learning from moral failure

Matthew Cashman & Fiery Cushman
In press: Becoming Someone New: Essays on Transformative Experience, Choice, and Change

Introduction

Pedagogical environments are often designed to minimize the chance of people acting wrongly; surely this is a sensible approach. But could it ever be useful to design pedagogical environments to permit, or even encourage, moral failure? If so, what are the circumstances where moral failure can be beneficial?  What types of moral failure are helpful for learning, and by what mechanisms? We consider the possibility that moral failure can be an especially effective tool in fostering learning. We also consider the obvious costs and potential risks of allowing or fostering moral failure. We conclude by suggesting research directions that would help to establish whether, when and how moral pedagogy might be facilitated by letting students learn from moral failure.

(cut)

Conclusion

Errors are an important source of learning, and educators often exploit this fact.  Failing helps to tune our sense of balance; Newtonian mechanics sticks better when we witness the failure of our folk physics. We consider the possibility that moral failure may also prompt especially strong or distinctive forms of learning.  First, and with greatest certainty, humans are designed to learn from moral failure through the feeling of guilt.  Second, and more speculatively, humans may be designed to experience moral failures by “testing limits” in a way that ultimately fosters an adaptive moral character.  Third—and highly speculatively—there may be ways to harness learning by moral failure in pedagogical contexts. Minimally, this might occur by imagination, observational learning, or the exploitation of spontaneous wrongful acts as “teachable moments”.

The book chapter is here.

Thursday, May 10, 2018

The WEIRD Science of Culture, Values, and Behavior

Kim Armstrong
Psychological Science
Originally posted April 2018

Here is an excerpt:

While the dominant norms of a society may shape our behavior, children first experience the influence of those cultural values through the attitudes and beliefs of their parents, which can significantly impact their psychological development, said Heidi Keller, a professor of psychology at the University of Osnabrueck, Germany.

Until recently, research within the field of psychology focused mainly on WEIRD (Western, educated, industrialized, rich, and democratic) populations, Keller said, limiting the understanding of the influence of culture on childhood development.

“The WEIRD group represents maximally 5% of the world’s population, but probably more than 90% of the researchers and scientists producing the knowledge that is represented in our textbooks work with participants from that particular context,” Keller explained.

Keller and colleagues’ research on the ecocultural model of development, which accounts for the interaction of socioeconomic and cultural factors throughout a child’s upbringing, explores this gap in the research by comparing the caretaking styles of rural and urban families throughout India, Cameroon, and Germany. The experiences of these groups can differ significantly from the WEIRD context, Keller notes, with rural farmers — who make up 30% to 40% of the world’s population — tending to live in extended family households while having more children at a younger age after an average of just 7 years of education.

The information is here.

Tuesday, May 8, 2018

Many People Taking Antidepressants Discover They Cannot Quit

Benedict Carey & Robert Gebeloff
The New York Times
Originally posted April 7, 2018

Here is an excerpt:

Dr. Peter Kramer, a psychiatrist and author of several books about antidepressants, said that while he generally works to wean patients with mild-to-moderate depression off medication, some report that they do better on it.

“There is a cultural question here, which is how much depression should people have to live with when we have these treatments that give so many a better quality of life,” Dr. Kramer said. “I don’t think that’s a question that should be decided in advance.”

Antidepressants are not harmless; they commonly cause emotional numbing, sexual problems such as a lack of desire or erectile dysfunction, and weight gain. Long-term users report in interviews a creeping unease that is hard to measure: daily pill-popping leaves them doubting their own resilience, they say.

“We’ve come to a place, at least in the West, where it seems every other person is depressed and on medication,” said Edward Shorter, a historian of psychiatry at the University of Toronto. “You do have to wonder what that says about our culture.”

Patients who try to stop taking the drugs often say they cannot. In a recent survey of 250 long-term users of psychiatric drugs — most commonly antidepressants — about half who wound down their prescriptions rated the withdrawal as severe. Nearly half who tried to quit could not do so because of these symptoms.

In another study of 180 longtime antidepressant users, withdrawal symptoms were reported by more than 130. Almost half said they felt addicted to antidepressants.

The information is here.

Wednesday, April 4, 2018

Musk and Zuckerberg are fighting over whether we rule technology—or it rules us

Michael Coren
Quartz.com
Originally posted April 1, 2018

Here is an excerpt:

Musk wants to rein in AI, which he calls “a fundamental risk to the existence of human civilization.” Zuckerberg has dismissed such views, calling their proponents “naysayers.” During a Facebook live stream last July, he added, “In some ways I actually think it is pretty irresponsible.” Musk was quick to retort on Twitter: “I’ve talked to Mark about this,” he wrote. “His understanding of the subject is limited.”

Both men’s views on the risks and rewards of technology are embodied in their respective companies. Zuckerberg has famously embraced the motto “Move fast and break things.” That served Facebook well as it exploded from a college campus experiment in 2004 to an aggregator of the internet for more than 2 billion users.

Facebook has treated the world as an infinite experiment, a game of low-stakes, high-volume tests that reliably generate profits, if not always progress. Zuckerberg’s main concern has been to deliver the fruits of digital technology to as many people as possible, as soon as possible. “I have pretty strong opinions on this,” Zuckerberg has said. “I am optimistic. I think you can build things and the world gets better.”

The information is here.

Tuesday, April 3, 2018

AI Has a Hallucination Problem That's Proving Tough to Fix

Tom Simonite
wired.com
Originally posted March 9, 2018

Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.

That could be a big problem for products dependent on machine learning, particularly for vision, such as self-driving cars. Leading researchers are trying to develop defenses against such attacks—but that’s proving to be a challenge.

Case in point: In January, a leading machine-learning conference announced that it had selected 11 new papers to be presented in April that propose ways to defend against or detect such adversarial attacks. Just three days later, first-year MIT grad student Anish Athalye threw up a webpage claiming to have “broken” seven of the new papers, including ones from boldface institutions such as Google, Amazon, and Stanford. “A creative attacker can still get around all these defenses,” says Athalye. He worked on the project with Nicholas Carlini and David Wagner, a grad student and professor, respectively, at Berkeley.

That project has led to some academic back-and-forth over certain details of the trio’s claims. But there’s little dispute about one message of the findings: It’s not clear how to protect the deep neural networks fueling innovations in consumer gadgets and automated driving from sabotage by hallucination. “All these systems are vulnerable,” says Battista Biggio, an assistant professor at the University of Cagliari, Italy, who has pondered machine learning security for about a decade, and wasn’t involved in the study. “The machine learning community is lacking a methodological approach to evaluate security.”
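For readers curious about the mechanics, the "subtle changes" the article describes are typically computed from the network's own gradients. Below is a minimal sketch of the fast gradient sign method (FGSM), one classic recipe for crafting such adversarial inputs; the article does not name a specific method, so this is purely illustrative. The sketch assumes a trained PyTorch image classifier, model, and an integer class-label tensor, label:

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction
# that most increases the model's loss, producing an image that
# looks unchanged to humans but can fool the classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a perturbed copy of `image` that raises the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()  # gradients of the loss w.r.t. each input pixel
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()  # keep valid pixel values
```

The contested defenses aim to blunt exactly this kind of gradient-guided perturbation; Athalye's claim is that a creative attacker with a stronger variant can still get around them.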

The article is here.

Tuesday, May 16, 2017

Why are we reluctant to trust robots?

Jim Everett, David Pizarro and Molly Crockett
The Guardian
Originally posted April 27, 2017

Technologies built on artificial intelligence are revolutionising human life. As these machines become increasingly integrated in our daily lives, the decisions they face will go beyond the merely pragmatic, and extend into the ethical. When faced with an unavoidable accident, should a self-driving car protect its passengers or seek to minimise overall lives lost? Should a drone strike a group of terrorists planning an attack, even if civilian casualties will occur? As artificially intelligent machines become more autonomous, these questions are impossible to ignore.

There are good arguments for why some ethical decisions ought to be left to computers—unlike human beings, machines are not led astray by cognitive biases, do not experience fatigue, and do not feel hatred toward an enemy. An ethical AI could, in principle, be programmed to reflect the values and rules of an ideal moral agent. And free from human limitations, such machines could even be said to make better moral decisions than us. Yet the notion that a machine might be given free rein over moral decision-making seems distressing to many—so much so that, for some, their use poses a fundamental threat to human dignity. Why are we so reluctant to trust machines when it comes to making moral decisions? Psychology research provides a clue: we seem to have a fundamental mistrust of individuals who make moral decisions by calculating costs and benefits, like computers do.
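The "calculating costs and benefits" style of moral reasoning the authors mention has a simple computational form: choose the action with the lowest expected harm. A toy sketch follows; the scenario and numbers are invented for illustration:

```python
# Toy expected-harm minimization: the consequentialist calculus
# the authors say computers naturally follow. Probabilities and
# casualty counts below are hypothetical.
def expected_harm(outcomes):
    """outcomes: list of (probability, lives_lost) pairs."""
    return sum(p * lives for p, lives in outcomes)

actions = {
    "swerve":   [(0.9, 1), (0.1, 0)],  # likely harms one passenger
    "continue": [(0.8, 3), (0.2, 0)],  # likely harms three pedestrians
}
best = min(actions, key=lambda a: expected_harm(actions[a]))
print(best)  # "swerve": expected harm 0.9 vs 2.4 lives
```

Part of the psychological discomfort, the authors suggest, is precisely that this arithmetic treats lives as quantities to be traded off.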

The article is here.

Monday, December 5, 2016

Why Some People Get Burned Out and Others Don't

Kandi Wiens and Annie McKee
Harvard Business Review
Originally posted November 23, 2016

Here is an excerpt:

What You Can Do to Manage Stress and Avoid Burnout

People do all kinds of destructive things to deal with stress—they overeat, abuse drugs and alcohol, and push harder rather than slowing down. What we learned from our study of chief medical officers is that people can leverage their emotional intelligence to deal with stress and ward off burnout. You, too, might want to try the following:

Don’t be the source of your stress. Too many of us create our own stress, with its full bodily response, merely by thinking about or anticipating future episodes or encounters that might be stressful. People who have a high need to achieve or perfectionist tendencies may be more prone to creating their own stress. We learned from our study that leaders who are attuned to the pressures they put on themselves are better able to control their stress level. As one CMO described, “I’ve realized that much of my stress is self-inflicted from years of being hard on myself. Now that I know the problems it causes for me, I can talk myself out of the non-stop pressure.”

Recognize your limitations. Becoming more aware of your strengths and weaknesses will clue you in to where you need help. In our study, CMOs described the transition from a clinician role to a leadership role as a major source of their stress. Those who recognized when the demands were outweighing their abilities didn't go it alone—they surrounded themselves with trusted advisors and asked for help.

The article is here.

Sunday, February 23, 2014

Can We Resolve Quantum Paradoxes by Stepping Out of Space and Time?

By George Musser
Scientific American Blog
Originally posted June 21, 2013

Here is an excerpt:

As is evident from von Baeyer’s article, quantum theory truly challenges us to think outside the box—and, in this case, I submit that the box is spacetime itself. If this seems farfetched, consider the eloquent point made by physicist and philosopher Ernan McMullin:

“Imaginability must not be made the test for ontology. The realist claim is that the scientist is discovering the structures of the world; it is not required in addition that these structures be imaginable in the categories of the macroworld.”

Only if we face the strange non-classical features of the physical world head-on can we have a physical, non-observer-dependent account of our reality that solves longstanding puzzles such as the problem of Schrödinger’s Cat.
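For readers unfamiliar with the puzzle, the standard textbook formulation of the Schrödinger's Cat problem (not spelled out in the post itself) is that, before observation, the quantum formalism assigns the combined system a superposition rather than a definite state:

```latex
% Textbook statement of the cat paradox (illustrative, not from the post):
% the unobserved atom-plus-cat system is assigned the superposition
\[
  \lvert \psi \rangle = \tfrac{1}{\sqrt{2}}
  \bigl( \lvert \text{alive} \rangle + \lvert \text{dead} \rangle \bigr),
\]
% which yields "alive" or "dead" with probability 1/2 each upon
% measurement. The puzzle is what, if anything, this state describes
% before anyone looks.
```

Proposals that step outside spacetime, like the one discussed in the post, are attempts to say what such a state could be a description of.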

The entire blog post is here.