Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Expertise.

Sunday, April 21, 2024

An Expert Who Has Testified in Foster Care Cases Across Colorado Admits Her Evaluations Are Unscientific

Eli Hager
Originally posted 18 March 24

Diane Baird had spent four decades evaluating the relationships of poor families with their children. But last May, in a downtown Denver conference room, with lawyers surrounding her and a court reporter transcribing, she was the one under the microscope.

Baird, a social worker and professional expert witness, has routinely advocated in juvenile court cases across Colorado that foster children be adopted by or remain in the custody of their foster parents rather than being reunified with their typically lower-income birth parents or other family members.

In the conference room, Baird was questioned for nine hours by a lawyer representing a birth family in a case out of rural Huerfano County, according to a recently released transcript of the deposition obtained by ProPublica.

Was Baird’s method for evaluating these foster and birth families empirically tested? No, Baird answered: Her method is unpublished and unstandardized, and has remained “pretty much unchanged” since the 1980s. It doesn’t have those “standard validity and reliability things,” she admitted. “It’s not a scientific instrument.”

Who hired and was paying her in the case that she was being deposed about? The foster parents, she answered. They wanted to adopt, she said, and had heard about her from other foster parents.

Had she considered or was she even aware of the cultural background of the birth family and child whom she was recommending permanently separating? (The case involved a baby girl of multiracial heritage.) Baird answered that babies have “never possessed” a cultural identity, and therefore are “not losing anything,” at their age, by being adopted. Although when such children grow up, she acknowledged, they might say to their now-adoptive parents, “Oh, I didn’t know we were related to the, you know, Pima tribe in northern California, or whatever the circumstances are.”

The Pima tribe is located in the Phoenix metropolitan area.


Here is my summary:

The article reports that Diane Baird, an expert witness who has testified in foster care cases across Colorado, admitted in a deposition that her evaluations are unscientific. Baird, who spent four decades evaluating the relationships of poor families with their children, calls her assessment method the "Kempe Protocol"; by her own account it is unpublished, unstandardized, and has never been tested for validity or reliability. The admission casts doubt on the evaluations she has provided in foster care cases and underscores the need for more rigorous, scientifically validated approaches in such consequential assessments.

Sunday, January 1, 2023

The Central Role of Lifelong Learning & Humility in Clinical Psychology

Washburn, J. J., Teachman, B. A., et al. 
(2022). Clinical Psychological Science, 0(0).
https://doi.org/10.1177/21677026221101063

Abstract

Lifelong learning plays a central role in the lives of clinical psychologists. As psychological science advances and evidence-based practices develop, it is critical for clinical psychologists to not only maintain their competencies but to also evolve them. In this article, we discuss lifelong learning as a clinical, ethical, and scientific imperative in the myriad dimensions of the clinical psychologist’s professional life, arguing that experience alone is not sufficient. Attitude is also important in lifelong learning, and we call for clinical psychologists to adopt an intellectually humble stance and embrace “a beginner’s mind” when approaching new knowledge and skills. We further argue that clinical psychologists must maintain and refresh their critical-thinking skills and seek to minimize their biases, especially as they approach the challenges and opportunities of lifelong learning. We intend for this article to encourage psychologists to think differently about how they approach lifelong learning.

Here is an excerpt:

Schwartz (2008) was specifically referencing the importance of teaching graduate students to embrace what they do not know, viewing it as an opportunity instead of a threat. The same is true, perhaps even more so, for psychologists engaging in lifelong learning.

As psychologists progress in their careers, they are told repeatedly that they are experts in their field and sometimes THE expert in their own tiny subfield. Psychologists spend their days teaching others what they know and advising students how to make their own discoveries. But expertise is a double-edged sword. Of course, it serves psychologists well in that they are less likely to repeat past mistakes, but it is a disadvantage if they become too comfortable in their expert role. The Egyptian mathematician Ptolemy devised a system based on the notion that the sun revolved around the earth that guided astronomers for centuries until Copernicus proved him wrong. Although Newton devised the laws of physics, Einstein showed that the principles of Newtonian physics were wholly bound by context and only “right” within certain constraints. Science is inherently self-correcting, and the only thing that one can count on is that most of what people believe today will be shown to be wrong in the not-too-distant future. One of the authors (S. D. Hollon) recalls that the two things he knew for sure coming out of graduate school were that neural tissues do not regenerate and that you cannot inherit acquired characteristics. It turns out that both are wrong. Lifelong learning and the science it is based on require psychologists to continuously challenge their expertise. Before becoming experts, psychologists often experience impostor phenomenon during education and training (Rokach & Boulazreg, 2020). Embracing the self-doubt that comes with feeling like an impostor can motivate lifelong learning, even for areas in which one feels like an expert. This means not only constantly learning about new topics but also recognizing that as psychologists tackle tough problems and their associated research questions, complex and often interdisciplinary approaches are required to develop meaningful answers. It is neither feasible nor desirable to become an expert in all domains. This means that psychologists need to routinely surround themselves with people who make them question or expand their expertise.

Here is the conclusion:

Lifelong learning should, like doctoral programs in clinical psychology, concentrate much more on thinking than training. Lifelong learning must encourage critical and independent thinking in the process of mastering relevant bodies of knowledge and the development of specific skills. Specifically, lifelong learning must reinforce the need for clinical psychologists to reflect carefully and critically on what they read, hear, and say and to think abstractly. Such abstract thinking is as relevant after one’s graduate career as before.

Wednesday, December 21, 2022

Do You Really Want to Read What Your Doctor Writes About You?

Zoya Qureshi
The Atlantic
Originally posted 15 NOV 22

You may not be aware of this, but you can read everything that your doctor writes about you. Go to your patient portal online, click around until you land on notes from your past visits, and read away. This is a recent development, and a big one. Previously, you always had the right to request your medical record from your care providers—an often expensive and sometimes fruitless process—but in April 2021, a new federal rule went into effect, mandating that patients have the legal right to freely and electronically access most kinds of notes written about them by their doctors.

If you’ve never heard of “open notes,” as this new law is informally called, you’re not the only one. Doctors say that the majority of their patients have no clue. (This certainly has been the case for all of the friends and family I’ve asked.) If you do know about the law, you likely know a lot about it. That’s typically because you’re a doctor—one who now has to navigate a new era of transparency in medicine—or you’re someone who knows a doctor, or you’re a patient who has become intricately familiar with this country’s health system for one reason or another.

When open notes went into effect, the change was lauded by advocates as part of a greater push toward patient autonomy and away from medical gatekeeping. Previously, hospitals could charge up to hundreds of dollars to release records, if they released them at all. Many doctors, meanwhile, have been far from thrilled about open notes. They’ve argued that this rule will introduce more challenges than benefits for both patients and themselves. At worst, some have fretted, the law will damage people’s trust in doctors and make everyone’s lives worse.

A year and a half in, however, open notes don’t seem to have done too much of anything. So far, they have neither revolutionized patient care nor sunk America’s medical establishment. Instead, doctors say, open notes have barely shifted the clinical experience at all. Few individual practitioners have been advertising the change, and few patients are seeking it out on their own. We’ve been left with a partially implemented system and a big unresolved question: How much, really, should you want to read what your doctor is writing about you?

(cut)

Open notes are only part of this conversation. The new law also requires that test results be made immediately available to patients, meaning that patients might see their health information before their physician does. Although this is fine for the majority of tests, problems arise when results are harbingers of more complex, or just bad, news. Doctors I spoke with shared that some of their patients have suffered trauma from learning about their melanoma or pancreatic cancer or their child’s leukemia from an electronic message in the middle of the night, with no doctor on hand to talk through the seriousness of the result. This was the case for Tara Daniels, a digital-marketing consultant who lives near Boston. She’s had leukemia three times, and learned about the third via a late-night notification from her patient portal. Daniels appreciates the convenience of open notes, which help her keep track of her interactions with various doctors. But, she told me, when it comes to instant results, “I still hold a lot of resentment over the fact that I found out from test results, that I had to figure it out myself, before my doctor was able to tell me.”

Tuesday, December 6, 2022

Countering cognitive biases on experts’ objectivity in court

Kathryn A. LaFortune
Monitor on Psychology
Vol. 53 No. 6
Print version: page 47

Mental health professionals’ opinions can be extremely influential in legal proceedings. Yet, current research is inconclusive about the effects of various cognitive biases on experts’ objectivity when making forensic mental health judgments and which biases most influence these decisions, according to a 2022 study in Law and Human Behavior by psychologists Tess Neal, Pascal Lienert, Emily Denne, and Jay Singh (Vol. 46, No. 2, 2022). The study also pointed to the need for more research on which debiasing strategies effectively counter bias in forensic mental health decisions and whether there should be specific policies and procedures to address these unique aspects of forensic work in mental health.

In the study, researchers conducted a systematic review of the relevant literature in forensic mental health decision-making. “Bias” was not generally defined in most of the available studies reviewed in the context of researching forensic mental health judgments. Their study noted that only a few forms of bias have been explored as they pertain specifically to forensic mental health professionals’ opinions. Adversarial allegiance, confirmation bias, hindsight bias, and bias blind spot have not been rigorously studied for potential negative effects on forensic mental health expert opinions across different contexts.

The importance of addressing these concerns is heightened when considering APA’s Ethics Code provisions that require psychologists to decline a professional role if bias may diminish their objectivity (see Ethical Principles of Psychologists and Code of Conduct, Section 3.06). Similarly, the Specialty Guidelines for Forensic Psychologists advise forensic practitioners to decline participation in cases when potential biases may impact their impartiality, or to take steps to correct or limit the effects of the bias (Section 2.07). That said, unlike in other professions where tasks are often repetitive, decision-making in forensic psychology is shaped by the unique nature of the various referrals that forensic psychologists receive, making it even more difficult to expect them to consider and correct how their culture, attitudes, values, beliefs, and biases might affect their work. They exercise greater subjectivity in selecting assessment tools from a large array of available tests, none of which are uniformly adopted in cases, in part because of the wide range of questions experts often must answer to assist the court and the current lack of standardized methods. Nor do experts typically receive immediate feedback on their opinions. This study also noted that the only debiasing strategy shown to be effective for forensic psychologists was to “consider the opposite,” in which experts ask themselves why their opinions might be wrong and what alternatives they may have considered.

Thursday, October 6, 2022

Defining Their Own Ethics, Online Creators Are De Facto Therapists for Millions—Explosive Demand & Few Safeguards

Tantum Hunter
The Washington Post
Originally posted 29 AUG 22

Here are two excerpts:

In real life, mental health information and care are sparse. In the United States, 1 in 3 counties do not have a single licensed psychologist, according to the American Psychological Association, and Americans say cost is a top barrier to seeking mental health help. On the internet, however, mental health tips are everywhere: TikTok videos with #mentalhealth in the caption have earned more than 43.9 billion views, according to the analytics company Sprout Social, and mentions of mental health on social media are increasing year by year.

The growing popularity of the subject means that creators of mental health content are filling a health-care gap. But social media apps are not designed to prioritize accurate, helpful information, critics say, just whatever content draws the biggest reaction. Young people could see their deepest struggles become fodder for advertisers and self-promoters. With no road map even for licensed professionals, mental health creators are defining their own ethics.

“I don’t want to give anyone the wrong advice,” Moloney says. “I’ve met some [followers] who’ve just started crying and saying ‘thank you’ and stuff like that. Even though it seems small, to someone else, it can have a really big impact.”

As rates of depression and anxiety spiked during the pandemic and options for accessible care dwindled, creators shared an array of content including first-person accounts of life with mental illness and videos listing symptoms of bipolar disorder. In many cases, their follower counts ballooned.

(cut)

Ideally, social media apps should be one item in a collection of mental health resources, said Jodi Miller, a researcher at Johns Hopkins University School of Education who studies the relationships among young people, technology and stress.

“Young people need evidence-based sources of information outside the internet, from parents and schools,” Miller said.

Often, those resources are unavailable. So it’s up to consumers to decide what mental health advice they put stock in, Fisher-Quann said. For her, condescending health-care providers and the warped incentives of social media platforms haven’t made that easy. But she thinks she can get better — and that her followers can, too.

“It all has to come from a place of self-awareness and desire to get better. Communities can be extremely helpful for that, but they can also be extremely harmful for that,” she said.

Thursday, September 8, 2022

Knowledge overconfidence is associated with anti-consensus views on controversial scientific issues

Light, N. et al. 
Science Advances, 20 Jul 2022
Vol 8, Issue 29
DOI: 10.1126/sciadv.abo0038

Abstract

Public attitudes that are in opposition to scientific consensus can be disastrous and include rejection of vaccines and opposition to climate change mitigation policies. Five studies examine the interrelationships between opposition to expert consensus on controversial scientific issues, how much people actually know about these issues, and how much they think they know. Across seven critical issues that enjoy substantial scientific consensus, as well as attitudes toward COVID-19 vaccines and mitigation measures like mask wearing and social distancing, results indicate that those with the highest levels of opposition have the lowest levels of objective knowledge but the highest levels of subjective knowledge. Implications for scientists, policymakers, and science communicators are discussed.

Discussion

Results from five studies show that the people who disagree most with the scientific consensus know less about the relevant issues, but they think they know more. These results suggest that this phenomenon is fairly general, although the relationships were weaker for some more polarized issues, particularly climate change. It is important to note that we document larger mismatches between subjective and objective knowledge among participants who are more opposed to the scientific consensus. Thus, although broadly consistent with the Dunning-Kruger effect and other research on knowledge miscalibration, our findings represent a pattern of relationships that goes beyond overconfidence among the least knowledgeable. However, the data are correlational, and the normal caveats apply.
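The shape of that relationship is easy to picture with a toy simulation. The sketch below uses purely synthetic data (the variable names, scales, and effect sizes are assumptions for illustration, not the study's data or code) to reproduce the qualitative pattern reported: opposition tracks objective knowledge negatively and subjective knowledge positively, so the most opposed are also the most overconfident.

```python
# Synthetic illustration of the reported pattern -- not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical opposition to the scientific consensus, from -1 (aligned)
# to +1 (strongly opposed).
opposition = rng.uniform(-1, 1, n)

# Objective knowledge falls, and subjective knowledge rises, with
# opposition (effect sizes invented for illustration).
objective = np.clip(0.6 - 0.25 * opposition + rng.normal(0, 0.15, n), 0, 1)
subjective = np.clip(0.5 + 0.25 * opposition + rng.normal(0, 0.15, n), 0, 1)

# Knowledge miscalibration: what people think they know minus what they know.
overconfidence = subjective - objective

print(np.corrcoef(opposition, objective)[0, 1])       # negative
print(np.corrcoef(opposition, subjective)[0, 1])      # positive
print(np.corrcoef(opposition, overconfidence)[0, 1])  # strongly positive
```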

A strength of these studies is the consistency of the main result across the overall models in studies 1 to 3 and specific (but different) instantiations of anti-consensus attitudes about COVID-19 in studies 4 and 5. Additional strengths are that study 5 is a conceptual replication of study 4 (and studies 1 to 3 more generally) using different measures and operationalizations of the main constructs, conducted by an initially independent group of researchers (with each group unaware of the research of the other during study development and data collection). The final two studies were also collected approximately 2 months apart, in July and September 2020, respectively. These two collection periods reflect the dynamic nature of the COVID-19 pandemic in the United States, with cases in July trending upward and cases in September flat or trending downward. The consistency of our effects across these 2 months suggests that the pattern of results is fairly robust.

One possible interpretation of these relationships is that the people who appear to be overconfident in their knowledge and extreme in their opposition to the consensus are actually reporting their sense of understanding for a set of incorrect alternative facts not those of the scientific community. After all, nonscientific explanations and theories tend to be much simpler and less mechanistic than scientific ones.  As a result, participants could be reporting higher levels of understanding for what are, in fact, simpler interpretations. However, we believe that several elements of this research speak against this interpretation fully explaining the results. First, the battery of objective knowledge questions is sufficiently broad, simple, and removed (at first glance) from the corresponding scientific issues. For example, not knowing that “the skin is the largest organ in the human body” does not suggest that participants hold alternative views about how the human body works; it suggests the lack of real knowledge about the body. We also believe that it does not cue participants to the fact that the question is related to vaccination. 

Friday, June 17, 2022

Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

S. Tolmeijer, M. Christen, et al.
In CHI Conference on Human Factors in 
Computing Systems (CHI '22), April 29-May 5,
2022, New Orleans, LA, USA. ACM

While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This shows in participants’ reliance on AI: AI recommendations and decisions are accepted more often than the human expert's. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.

From the Discussion Section

Design implications for ethical AI

In sum, we find that participants had slightly higher moral trust in, and ascribed somewhat more responsibility to, human experts, but showed higher capacity trust, overall trust, and reliance on AI. These differently perceived capabilities could be combined in some form of human-AI collaboration. However, the AI's lack of responsibility can become a problem when AI is implemented for ethical decision making. When a human expert is involved but has less autonomy, they risk becoming a scapegoat for decisions the AI proposed in the event of negative outcomes.

At the same time, we find that the different levels of autonomy, i.e., the human-in-the-loop and human-on-the-loop settings, did not influence the trust people had, the responsibility they assigned (both to themselves and to the respective experts), or the reliance they displayed. A large part of the discussion on the usage of AI has focused on control and the level of autonomy that the AI gets for different tasks. However, our results suggest that this has less of an influence, as long as a human is appointed to be responsible in the end. Instead, an important focus of designing AI for ethical decision making should be on the different types of trust users show for a human vs. an AI expert.

One conclusion from this finding that the control conditions of AI may be less relevant than expected is that the focus of human-AI collaboration should be less on control and more on how the involvement of AI improves human ethical decision making. An important factor in that respect will be the time available for actual decision making: if time is short, AI advice or decisions should make clear which value guided the decision process (e.g., maximizing the expected number of people to be saved irrespective of any characteristics of the individuals involved), so that the human decider can make (or evaluate) the decision in an ethically informed way. If time for deliberation is available, an AI decision-support system could be designed to counteract human biases in ethical decision making (e.g., by pointing to the possibility that human deciders focus solely on utility maximization, thereby neglecting fundamental rights of individuals) so that those biases can become part of the deliberation process.
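As a concrete reading of that suggestion, here is a minimal sketch of advice that carries its guiding value and known blind spots along with it; the class and field names are illustrative assumptions, not an interface from the paper.

```python
# Illustrative sketch only: AI advice that surfaces the value guiding it,
# plus caveats a human decider should weigh during deliberation.
from dataclasses import dataclass, field

@dataclass
class EthicalAdvice:
    recommendation: str
    guiding_value: str                           # value that drove the advice
    caveats: list = field(default_factory=list)  # possible biases to debate

advice = EthicalAdvice(
    recommendation="Allocate the scarce resource to patient B",
    guiding_value="maximize the expected number of people saved",
    caveats=["pure utility maximization may neglect individual rights"],
)

print(f"{advice.recommendation} (guided by: {advice.guiding_value})")
for caveat in advice.caveats:
    print(f"  consider: {caveat}")
```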

Sunday, March 20, 2022

The prejudices of expert evidence

Chin, J., Cullen, H. J., & Clarke, B. 
(2022, February 14).
https://doi.org/10.31222/osf.io/nxcvy

Abstract

The rules and procedures regulating the admission of potentially unreliable expert evidence have been substantially weakened over the past several years. We respond to this trend by focusing on one aspect of the rules that has not been explicitly curtailed: unfair prejudice. Unfair prejudice is an important component of trial judges’ authority to exclude evidence, which they may do when that unfair prejudice outweighs the evidence’s probative value. We develop the concept of unfair prejudice by first examining how it has been interpreted by judges and then relating that to the relevant social scientific research on the characteristics of expertise that can make it prejudicial. In doing so, we also discuss the research behind a common reason that judges admit expert evidence despite its prejudice, which is that judicial directions help jurors understand and weigh it. As a result, this article provides two main contributions. First, it advances knowledge about unfair prejudice, which is an important part of expert evidence law that has received relatively little attention from legal researchers. Second, it provides guidance to practitioners for challenging expert evidence under one of the few avenues left to do so.

(cut)

What should courts do about the prejudices of expert evidence?

While we recognise that balancing probative value with unfair prejudice is fact-specific and contextual, the analysis above suggests considerable room for improvement in how courts assess putatively prejudicial expert evidence. Specifically, the research we reviewed indicates that courts do not fully appreciate the degree to which laypeople may overestimate the reliability of scientific claims. But, more than that, the judicial approach has been myopically focused on the CSI Effect (and in at least one case, significantly misconstrued it), rather than on other well-researched expert evidence stereotypes and misconceptions. Accordingly, we recommend that judges apply the discretions to exclude evidence in sections 135 and 137 of the UEL in a way that is more sensitive to empirical research. For example, courts should recognise that experts, or counsel, who emphasise the expert’s status and years of experience also feed into that evidence’s prejudicial potential. Moreover, technical jargon and the general complexity of the evidence can serve to heighten that prejudice, such that these features of expert evidence may build upon each other in a way that is more than additive.

The expert evidence jurisprudence is even more insensitive to research on the factors that make evidence difficult or impossible to test. For example, we struggled (as others have) to find decisions acknowledging that unconscious cognitive processes and associated biases invite prejudice because the unconscious is difficult to cross-examine. Moreover, the closest decision we could find acknowledging adversarial imbalance as a limit on adversarial testing was a US decision in obiter. And troublingly, courts sometimes simply mistake previously admitted evidence for evidence that has been adversarially tested. With evidence that defies testing, the first step for courts is to acknowledge this research on prejudice and incorporate it into the exclusionary calculus in sections 135 and 137. The next step, as we will see in the following part, is to use this knowledge to better understand the limitations of judicial directions aimed at mitigating prejudice – and perhaps craft better directions in the future.

Thursday, December 16, 2021

The hidden ‘replication crisis’ of finance

Robin Wigglesworth 
Financial Times
Originally published 15 NOV 2021

Here is an excerpt:

Is investing suffering from something similar?

That is the incendiary argument of Campbell Harvey, professor of finance at Duke University. He reckons that at least half of the 400 supposedly market-beating strategies identified in top financial journals over the years are bogus. Worse, he worries that many fellow academics are in denial about this.

“It’s a huge issue,” he says. “Step one in dealing with the replication crisis in finance is to accept that there is a crisis. And right now, many of my colleagues are not there yet.”

Harvey is not some obscure outsider or performative contrarian attempting to gain attention through needless controversy. He is the former editor of the Journal of Finance, a former president of the American Finance Association, and an adviser to investment firms like Research Affiliates and Man Group.

(cut)

Obviously, the stakes of the replication crisis are much higher in medicine, where lives can be in play. But it is not something that remains confined to the ivory towers of business schools, as investment groups often smell an opportunity to sell products based on apparently market-beating factors, Harvey argues. “It filters into the real world,” he says. “It definitely makes it into people’s portfolios.”


Monday, July 26, 2021

Do doctors engaging in advocacy speak for themselves or their profession?

Elizabeth Lanphier
Journal of Medical Ethics Blog
Originally posted 17 June 21

Here is an excerpt:

My concern is not the claim that expertise should be shared. (It should!) Nor do I think there is any neat distinction between physician responsibilities for individual health and public health. But I worry that when Strous and Karni alternately frame physician duties to “speak out” as individual duties and collective ones, they collapse necessary distinctions between the risks, benefits, and demands of these two types of obligations.

Many of us have various role-based individual responsibilities. We can have obligations as a parent, as a citizen, or as a professional. Having an individual responsibility as a physician involves duties to your patients, but also general duties to care in the event you are in a situation in which your expertise is needed (the “is there a doctor on this flight?” scenario).

Collective responsibility, on the other hand, is when a group has a responsibility as a group. The philosophical literature debates hard-to-resolve questions about what it means to be a “group,” and how groups come to have or discharge responsibilities. Collective responsibility raises complicated questions like: If physicians have a collective responsibility to speak out during the COVID-19 pandemic, does every physician have such an obligation? Does any individual physician?

Because individual obligations attribute duties to specific persons responsible for carrying them out in ways collective duties tend not to, I can see why individual physician obligations are attractive. But this comes with risks. One risk is that a physician speaks out as an individual, appealing to the authority of their medical credentials, but not in alignment with their profession.

In my essay I describe a family physician inviting his extended family for a holiday meal during a peak period of SARS-CoV-2 transmission because he didn’t think COVID-19 was a “big deal.”

More infamously, Dr. Scott Atlas served as Donald J. Trump’s coronavirus advisor, and although he is a physician, he did not have experience in public health, infectious disease, or critical care medicine applicable to COVID-19. Atlas was a physician speaking as a physician, but he routinely promoted views starkly different than those of physicians with expertise relevant to the pandemic, and the guidance coming from scientific and medical communities.

Sunday, July 11, 2021

It just feels right: an account of expert intuition

Fridland, E., & Stichter, M. 
Synthese (2020). 
https://doi.org/10.1007/s11229-020-02796-9

Abstract

One of the hallmarks of virtue is reliably acting well. Such reliable success presupposes that an agent (1) is able to recognize the morally salient features of a situation, and the appropriate response to those features and (2) is motivated to act on this knowledge without internal conflict. Furthermore, it is often claimed that the virtuous person can do this (3) in a spontaneous or intuitive manner. While these claims represent an ideal of what it is to have a virtue, it is less clear how to make good on them. That is, how is it actually possible to spontaneously and reliably act well? In this paper, we will lay out a framework for understanding how it is that one could reliably act well in an intuitive manner. We will do this by developing the concept of an action schema, which draws on the philosophical and psychological literature on skill acquisition and self-regulation. In short, we will give an account of how self-regulation, grounded in skillful structures, can allow for the accurate intuitions and flexible expertise required for virtue. While our primary goal in this paper is to provide a positive theory of how virtuous intuitions might be accounted for, we also take ourselves to be raising the bar for what counts as an explanation of reliable and intuitive action in general.

Conclusion

By thinking of skill and expertise as sophisticated forms of self-regulation, we are able to get a handle on intuition, generally, and on the ways in which reliably accurate intuition may develop in virtue, specifically. This gives us a way of explaining both the accuracy and immediacy of the virtuous person’s perception and intuitive responsiveness to a situation, and it also gives us further reason to prefer a virtue-as-skill account of virtue. Moreover, such an approach gives us the resources to explain, with some rigor and precision, the ways in which expert intuition can be accounted for by appeal to action schemas. Lastly, our approach provides reason to think that expert intuition in the realm of virtue can indeed develop over time and with practice in a way that is flexible, controlled and intelligent. It lends credence to the view that virtue is learned and that we can act reliably and well by grounding our actions in expert intuition.

Thursday, February 18, 2021

Intuitive Expertise in Moral Judgements.

Wiegmann, A., & Horvath, J. 
(2020, December 22). 

Abstract

According to the ‘expertise defence’, experimental findings which suggest that intuitive judgements about hypothetical cases are influenced by philosophically irrelevant factors do not undermine their evidential use in (moral) philosophy. This defence assumes that philosophical experts are unlikely to be influenced by irrelevant factors. We discuss relevant findings from experimental metaphilosophy that largely tell against this assumption. To advance the debate, we present the most comprehensive experimental study of intuitive expertise in ethics to date, which tests five well-known biases of judgement and decision-making among expert ethicists and laypeople. We found that even expert ethicists are affected by some of these biases, but also that they enjoy a slight advantage over laypeople in some cases. We discuss the implications of these results for the expertise defence, and conclude that they still do not support the defence as it is typically presented in (moral) philosophy.

Conclusion

We first considered the experimental restrictionist challenge to intuitions about cases, with a special focus on moral philosophy, and then introduced the expertise defence as the most popular reply. The expertise defence makes the empirically testable assumption that the case intuitions of expert philosophers are significantly less influenced by philosophically irrelevant factors than those of laypeople. The upshot of our discussion of relevant findings from experimental metaphilosophy was twofold: first, extant findings largely tell against the expertise defence, and second, the number of published studies and investigated biases is still fairly small. To advance the debate about the expertise defence in moral philosophy, we thus tested five well-known biases of judgement and decision-making among expert ethicists and laypeople. Averaged across all biases and scenarios, the intuitive judgements of both experts and laypeople were clearly susceptible to bias. However, moral philosophers were also less biased in two of the five cases (Focus and Prospect), although we found no significant expert-lay differences in the remaining three cases.

In comparison to previous findings (for example Schwitzgebel and Cushman [2012, 2015]; Wiegmann et al. [2020]), our results appear to be relatively good news for the expertise defence, because they suggest that moral philosophers are less influenced by some morally irrelevant factors, such as a simple saving/killing framing. On the other hand, our study does not support the very general armchair versions of the expertise defence that one often finds in metaphilosophy, which try to reassure (moral) philosophers that they need not worry about the influence of philosophically irrelevant factors. At best, however, we need not worry about just a few cases and a few human biases—and even that modest hypothesis can only be upheld on the basis of sufficient empirical research.

Tuesday, November 24, 2020

How to know who’s trustworthy

T. Ryan Byerly
psyche.co
Originally posted 4 Nov 2020

Here is an excerpt:

An interesting fact about the virtues of intellectual dependability is that they are both intellectual and moral virtues. They’re ‘intellectual’ in the sense that they’re concerned with intellectual goods such as knowledge and understanding; but they’re moral virtues too, because they’re concerned with the intellectual goods of others. Indeed, the moral, other-regarding features of these virtues are especially central in a way that’s different to other intellectual virtues, such as inquisitiveness or intellectual perseverance.

It is in part because of the centrality of their other-regarding dimensions that the virtues of intellectual dependability haven’t taken on a larger role in education. The reigning paradigm of what we should aim for in education is that of the critical thinker. But being a critical thinker doesn’t necessarily mean that you possess other-regarding qualities, such as the virtues of intellectual dependability.

While we might lament this fact when it comes to formal education, we can still make efforts to become more intellectually dependable on our own. And we arguably should try to do so. After all, it’s not just us who are in need of dependable guides in our networks – we need to be intellectually dependable for the sake of others, too.

If we want to grow in these virtues of intellectual dependability – to become more benevolent, transparent and so on – what can we do? The following are four strategies that researchers tend to agree can help us grow in intellectual virtue.

A first strategy is direct instruction – learning about the nature of particular intellectual virtues that we hope to cultivate. Ideally, we’ll gain an account of what the virtue involves, and we might learn about the vices that oppose it. Part of the reason why direct instruction is important is that it helps to reduce our cognitive load. It gives us a framework to think through our intellectual life. It also helps us set a target to aim for.

A second strategy is to think how intellectual virtues apply in particular situations, considering what the intellectual virtue – and perhaps also its opposing vices – looks like in action. You might select some historical, contemporary or even fictional examples of people who appear to act in accordance with the virtue or its opposing vice. By encountering exemplars, you might gain a taste or sensibility for the virtue, and a person to emulate. More generally, this exercise can help you to practise evaluating scenarios in which intellectual virtues can influence behaviour. When done well, this can help you appreciate the variety of contexts in which intellectual virtues make a difference, and the different kinds of behaviour they lead to.

Saturday, August 8, 2020

How behavioural sciences can promote truth, autonomy and democratic discourse online

Lorenz-Spreen, P., Lewandowsky,
S., Sunstein, C.R. et al.
Nat Hum Behav (2020).
https://doi.org/10.1038/s41562-020-0889-7

Abstract

Public opinion is shaped in significant part by online content, spread via social media and curated algorithmically. The current online ecosystem has been designed predominantly to capture user attention rather than to promote deliberate cognition and autonomous choice; information overload, finely tuned personalization and distorted social cues, in turn, pave the way for manipulation and the spread of false information. How can transparency and autonomy be promoted instead, thus fostering the positive potential of the web? Effective web governance informed by behavioural research is critically needed to empower individuals online. We identify technologically available yet largely untapped cues that can be harnessed to indicate the epistemic quality of online content, the factors underlying algorithmic decisions and the degree of consensus in online debates. We then map out two classes of behavioural interventions—nudging and boosting—that enlist these cues to redesign online environments for informed and autonomous choice.

Here is an excerpt:

Another competence that could be boosted to help users deal more expertly with information they encounter online is the ability to make inferences about the reliability of information based on the social context from which it originates. The structure and details of the entire cascade of individuals who have previously shared an article on social media have been shown to serve as proxies for epistemic quality. More specifically, the sharing cascade contains metrics such as the depth and breadth of dissemination by others, with deep and narrow cascades indicating extreme or niche topics and breadth indicating widely discussed issues. A boosting intervention could provide this information (Fig. 3a) to display the full history of a post, including the original source, the friends and public users who disseminated it, and the timing of the process (showing, for example, if the information is old news that has been repeatedly and artificially amplified). Reading and interpreting cascade statistics may take some practice, and one may need to experience a number of cascades to learn to recognize informative patterns.
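To make "depth" and "breadth" concrete, here is a small sketch of how both metrics could be computed from a reshare tree. The tree representation and function name are assumptions for illustration; the paper describes the metrics, not an implementation.

```python
# Toy illustration of the cascade metrics described above.
from collections import deque

def cascade_depth_breadth(children, root):
    """Breadth-first walk of a share cascade.

    children maps each user to the users who reshared from them.
    Returns (depth, breadth): depth is the longest reshare chain,
    breadth is the largest number of shares at any single level.
    """
    depth, breadth = 0, 1
    level = deque([root])
    while level:
        breadth = max(breadth, len(level))
        next_level = deque()
        for user in level:
            next_level.extend(children.get(user, []))
        if next_level:
            depth += 1
        level = next_level
    return depth, breadth

# A deep, narrow cascade (niche topic) vs. a shallow, broad one.
niche = {"src": ["a"], "a": ["b"], "b": ["c"], "c": ["d"]}
broad = {"src": ["a", "b", "c", "d", "e"]}
print(cascade_depth_breadth(niche, "src"))  # (4, 1)
print(cascade_depth_breadth(broad, "src"))  # (1, 5)
```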

Tuesday, April 28, 2020

Athletes often don’t know what they’re talking about (Apparently, neither do Presidents)

Cathal Kelly
The Globe and Mail
Originally posted 20 April 20

Here is an excerpt:

This is what happens when we depend on celebrities to amplify good advice. The ones who have bad advice will feel similarly empowered. You can see where this particular case slid off the rails.

Djokovic has spent years trying to curate an identity as a sports brand. Early on, he tried the Tiger Beat route, a la Rafael Nadal. When that didn’t work, he tried haughty and detached, a la Roger Federer. Same result.

Some time around 2010, Djokovic decided to go Full Weirdo. He gave up gluten, got into cosmology and decided to present himself as a sort of seeker of universal truths. He even let everyone know that he’d been visiting a Buddhist temple during Wimbledon because … well, who knows what enlightenment and winning at tennis have to do with each other?

Nobody really got his new act, but this switch happened to coincide with Djokovic’s rise to the top. So he stuck with it.

This went hand in hand with an irrepressibly chirpy public persona, one so calculatedly ingratiating that it often had the opposite effect.

It wasn’t a terrible strategy. Highly successful sporting oddbods usually become cult stars. If they hang on long enough, they find general acceptance.

But it didn’t turn out for Djokovic. Even now that he is arguably the greatest men’s player of all time, he still can’t manage the trick. There’s just something about the guy that seems a bit not-of-this-world.


Wednesday, March 11, 2020

Expertise in Child Abuse?

Mike Hixenbaugh & Taylor Mirfendereski
NBCnews.com
Originally posted 14 Feb 20

Here is an excerpt:

Contrary to Woods’ testimony, there are more than 375 child abuse pediatricians certified by the American Board of Pediatrics in the U.S., all of whom have either completed an extensive fellowship program — first offered, not three, but nearly 15 years ago, while Woods was still in medical school — or spent years examining cases of suspected abuse prior to the creation of the medical subspecialty in 2009. The doctors are trained to differentiate accidental from inflicted injuries, which child abuse pediatricians say makes them better qualified than other doctors to determine whether a child has been abused. At least three physicians have met those qualifications and are practicing as board-certified child abuse pediatricians in the state of Washington.

Woods is not one of them.

Despite her lack of fellowship training, state child welfare and law enforcement officials in Washington have granted Woods remarkable influence over their decisions about whether to remove children from parents or pursue criminal charges, NBC News and KING 5 found. In four cases reviewed by reporters, child welfare workers took children from parents based on Woods’ reports — including some in which Woods misstated key facts, according to a review of records — despite contradictory opinions from other medical experts who said they saw no evidence of abuse.

In one instance, a pediatrician, Dr. Niran Al-Agba, insisted that a 2-year-old child’s bruise matched her parents’ description of an accidental fall onto a heating grate in their home. But Child Protective Services workers, who’d gotten a call from the child’s day care after someone noticed the bruise, asked Woods to look at photos of the injury.

Woods reported that the mark was most likely the result of abuse, even though she’d never seen the child in person or talked to the parents. The agency sided with her. To justify that decision, the Child Protective Services worker described Woods as “a physician with extensive training and experience in regard to child abuse and neglect,” according to a written report reviewed by reporters.


Monday, March 2, 2020

The Dunning-Kruger effect, or why the ignorant think they’re experts

Alexandru Micu
zmescience.com
Originally posted 13 Feb 20

Here is an excerpt:

It’s not specific only to technical skills but plagues all walks of human existence equally. One study found that 80% of drivers rate themselves as above average, which is literally impossible because that’s not how averages work. We tend to gauge our own relative popularity the same way.
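One clarification worth adding: the impossibility is strict for the median (at most half of any group can sit above it), whereas a skewed distribution really can place a large majority above the mean. A quick check with hypothetical numbers:

```python
# Hypothetical skill scores: a few very bad drivers drag the mean down.
import statistics

scores = [10, 15, 20] + [70] * 7

mean = statistics.mean(scores)      # 53.5
median = statistics.median(scores)  # 70

above_mean = sum(s > mean for s in scores) / len(scores)
above_median = sum(s > median for s in scores) / len(scores)

print(above_mean)    # 0.7 -- 70% genuinely sit above the mean
print(above_median)  # 0.0 -- no one exceeds the median; at most half ever can
```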

It isn’t limited to people with low or nonexistent skills in a certain matter, either — it works on pretty much all of us. In their first study, Dunning and Kruger also found that students who scored in the top quartile (25%) routinely underestimated their own competence.

A fuller definition of the Dunning-Kruger effect would be that it represents a bias in estimating our own ability that stems from our limited perspective. When we have a poor or nonexistent grasp on a topic, we literally know too little of it to understand how little we know. Those who do possess the knowledge or skills, however, have a much better idea of where they sit. But they also think that if a task is clear and simple to them, it must be so for everyone else as well.

A person in the first group and one in the second group are equally liable to use their own experience and background as the baseline and kinda just take it for granted that everyone is near that baseline. They both partake in the “illusion of confidence” — for one, that confidence is in themselves, for the other, in everyone else.


Wednesday, January 8, 2020

Can expert bias be reduced in medical guidelines?

Sheldon Greenfield
BMJ 2019; 367
https://doi.org/10.1136/bmj.l6882 

Here are two excerpts:

Despite robust study designs, even double-blind randomised controlled trials can be subject to subtle forms of bias. This can be because of the financial conflicts of interest of the authors, intellectual or discipline-based opinions, pressure on researchers from sponsors, or conflicting values. For example, some researchers may favour mortality over quality of life as a primary outcome, demonstrating a value conflict. The quality of evidence is often uneven and can include underappreciated sources of bias. This makes interpreting the evidence difficult, which results in guideline developers turning to “experts” to translate it into clinical practice recommendations.

Can we be confident that these experts are objective and free of bias? A 2011 Institute of Medicine (now known as the National Academy of Medicine) report challenged the assumption of objectivity among guideline development experts.

(cut)

The science that supports clinical medicine is constantly evolving. The pace of that evolution is increasing.

There is an urgent imperative to generate and update accurate, unbiased, clinical practice guidelines. So, what can we do now? I have two suggestions.

Firstly, the public, which may include physicians, nurses, and other healthcare providers dependent on guidelines, should advocate for organisations like the ECRI Institute and its international counterparts to be supported and looked to for setting standards.

Secondly, we should continue to examine the details and principles of “shared decision making” and other initiatives like it, so that doctors and patients can be as clear as possible in the face of uncertain evidence about medical treatments and recommendations.

It is an uphill battle, but one worth fighting.

Thursday, July 11, 2019

The Business of Health Care Depends on Exploiting Doctors and Nurses

Danielle Ofri
The New York Times
Originally published June 8, 2019

One resource seems infinite and free: the professionalism of caregivers.

You are at your daughter’s recital and you get a call that your elderly patient’s son needs to talk to you urgently.  A colleague has a family emergency and the hospital needs you to work a double shift.  Your patient’s M.R.I. isn’t covered and the only option is for you to call the insurance company and argue it out.  You’re only allotted 15 minutes for a visit, but your patient’s medical needs require 45.

These quandaries are standard issue for doctors and nurses.  Luckily, the response is usually standard issue as well: An overwhelming majority do the right thing for their patients, even at a high personal cost.

It is true that health care has become corporatized to an almost unrecognizable degree.  But it is also true that most clinicians remain committed to the ethics that brought them into the field in the first place.  This makes the hospital an inspiring place to work.

Increasingly, though, I’ve come to the uncomfortable realization that this ethic that I hold so dear is being cynically manipulated.

By now, corporate medicine has milked just about all the “efficiency” it can out of the system.  With mergers and streamlining, it has pushed the productivity numbers about as far as they can go.

But one resource that seems endless — and free — is the professional ethic of medical staff members.

This ethic holds the entire enterprise together.  If doctors and nurses clocked out when their paid hours were finished, the effect on patients would be calamitous.  Doctors and nurses know this, which is why they don’t shirk.  The system knows it, too, and takes advantage.

The demands on medical professionals have escalated relentlessly in the past few decades, without a commensurate expansion of time and resources.  For starters, patients are sicker these days.  The medical complexity per patient — the number and severity of chronic conditions — has steadily increased, meaning that medical encounters are becoming ever more involved.  They typically include more illnesses to treat, more medications to administer, more complications to handle — all in the same-length office or hospital visit.


Monday, December 24, 2018

Your Intuition Is Wrong, Unless These 3 Conditions Are Met

Emily Zulz
www.thinkadvisor.com
Originally posted November 16, 2018

Here is an excerpt:

“Intuitions of master chess players when they look at the board [and make a move], they’re accurate,” he said. “Everybody who’s been married could guess their wife’s or their husband’s mood by one word on the telephone. That’s an intuition and it’s generally very good, and very accurate.”

According to Kahneman, who’s studied when one can trust intuition and when one cannot, there are three conditions that need to be met in order to trust one’s intuition.

The first is that there has to be some regularity in the world that someone can pick up and learn.

“So, chess players certainly have it. Married people certainly have it,” Kahneman explained.

However, he added, people who pick stocks in the stock market do not have it.

“Because, the stock market is not sufficiently regular to support developing that kind of expert intuition,” he explained.

The second condition for accurate intuition is “a lot of practice,” according to Kahneman.

And the third condition is immediate feedback. Kahneman said that “you have to know almost immediately whether you got it right or got it wrong.”
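Taken together, the three conditions act as a simple conjunctive test. A toy encoding (the function and argument names are illustrative, not Kahneman's):

```python
# Toy encoding of the checklist quoted above; all three must hold.
def can_trust_intuition(regular_environment: bool,
                        lots_of_practice: bool,
                        immediate_feedback: bool) -> bool:
    return regular_environment and lots_of_practice and immediate_feedback

# Chess: regular game, years of practice, instant feedback on each move.
print(can_trust_intuition(True, True, True))    # True

# Stock picking: irregular environment, feedback slow and noisy.
print(can_trust_intuition(False, True, False))  # False
```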
