Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, June 20, 2021

Artificial intelligence research may have hit a dead end

Thomas Nail
salon.com
Originally published 30 April 21

Here is an excerpt:

If it's true that cognitive fluctuations are requisite for consciousness, it would also take time for stable frequencies to emerge and then synchronize with one another in resting states. And indeed, this is precisely what we see in children's brains when they develop higher and more nested neural frequencies over time.

Thus, a general AI would probably not be brilliant in the beginning. Intelligence evolved through the mobility of organisms trying to synchronize their fluctuations with the world. It takes time to move through the world and learn to sync up with it. As the science fiction author Ted Chiang writes, "experience is algorithmically incompressible." 

This is also why dreaming is so important. Experimental research confirms that dreams help consolidate memories and facilitate learning. Dreaming is also a state of exceptionally playful and freely associated cognitive fluctuations. If this is true, why should we expect human-level intelligence to emerge without dreams? This may be why newborns dream twice as much as adults (assuming they dream during their REM sleep). They have a lot to learn, as would androids.

In my view, there will be no progress toward human-level AI until researchers stop trying to design computational slaves for capitalism and start taking the genuine source of intelligence seriously: fluctuating electric sheep.

Saturday, June 19, 2021

Preparing for the Next Generation of Ethical Challenges Concerning Heritable Human Genome Editing

Robert Klitzman
The American Journal of Bioethics
(2021) Volume 21 (6), 1-4.

Here is the conclusion:

Moving Forward

Policymakers will thus need to make complex and nuanced risk/benefit calculations regarding costs and extents of treatments, ages of onset, severity of symptoms, degrees of genetic penetrance, disease prevalence, future scientific benefits, research costs, appropriate allocations of limited resources, and questions of who should pay.

Future efforts should thus consider examining scientific and ethical challenges in closer conjunction, rather than in isolation, and bring together the respective strengths of the Commission's and the WHO Committee's approaches. The WHO Committee includes broader stakeholders, but does not yet appear to have drawn conclusions regarding such specific medical and scientific scenarios (WHO 2020). These two groups' respective memberships also differ in instructive ways that can mutually inform future deliberations. Among the Commission's 18 chairs and members, only two appear to work primarily in ethics or policy; the majority are scientists (National Academy of Medicine, the National Academies of Sciences and the Royal Society 2020). In contrast, the WHO Committee includes two chairs and 16 members, with both chairs and the majority of members working primarily in ethics, policy or law (WHO 2020). ASRM and other countries' relevant professional organizations should also stipulate that physicians and healthcare professionals should not be involved in any way in the care of patients using germline editing abroad.

The Commission’s Report thus provides valuable insights and guidelines, but multiple stakeholders will likely soon confront additional, complex dilemmas involving interplays of both science and ethics that also need urgent attention.

Friday, June 18, 2021

Wise teamwork: Collective confidence calibration predicts the effectiveness of group discussion

Silver, I., Mellers, B. A., & Tetlock, P. E.
Journal of Experimental Social Psychology
Volume 96, September 2021.

Abstract

‘Crowd wisdom’ refers to the surprising accuracy that can be attained by averaging judgments from independent individuals. However, independence is unusual; people often discuss and collaborate in groups. When does group interaction improve vs. degrade judgment accuracy relative to averaging the group's initial, independent answers? Two large laboratory studies explored the effects of 969 face-to-face discussions on the judgment accuracy of 211 teams facing a range of numeric estimation problems from geographic distances to historical dates to stock prices. Although participants nearly always expected discussions to make their answers more accurate, the actual effects of group interaction on judgment accuracy were decidedly mixed. Importantly, a novel, group-level measure of collective confidence calibration robustly predicted when discussion helped or hurt accuracy relative to the group's initial independent estimates. When groups were collectively calibrated prior to discussion, with more accurate members being more confident in their own judgment and less accurate members less confident, subsequent group interactions were likelier to yield increased accuracy. We argue that collective calibration predicts improvement because groups typically listen to their most confident members. When confidence and knowledge are positively associated across group members, the group's most knowledgeable members are more likely to influence the group's answers.

Conclusion

People often display exaggerated beliefs about their skills and knowledge. We misunderstand and over-estimate our ability to answer general knowledge questions (Arkes, Christensen, Lai, & Blumer, 1987), save for a rainy day (Berman, Tran, Lynch Jr, & Zauberman, 2016), and resist unhealthy foods (Loewenstein, 1996), to name just a few examples. Such failures of calibration can have serious consequences, hindering our ability to set goals (Kahneman & Lovallo, 1993), make plans (Janis, 1982), and enjoy experiences (Mellers & McGraw, 2004). Here, we show that collective calibration also predicts the effectiveness of group discussions. In the context of numeric estimation tasks, poorly calibrated groups were less likely to benefit from working together, and, ultimately, offered less accurate answers. Group interaction is the norm, not the exception. Knowing what we know (and what we don't know) can help predict whether interactions will strengthen or weaken crowd wisdom.
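The calibration idea above lends itself to a small illustration. The sketch below is not the authors' actual measure or data; it simply approximates "collective calibration" as the correlation between each member's confidence and accuracy, alongside the wisdom-of-crowds average of independent estimates.

```python
# Illustrative sketch (hypothetical measure and numbers, not the
# authors' dataset): a group is "collectively calibrated" when its
# more confident members are also its more accurate ones.
from statistics import mean

def crowd_estimate(estimates):
    """Wisdom-of-crowds baseline: average of independent estimates."""
    return mean(estimates)

def collective_calibration(estimates, confidences, truth):
    """Pearson correlation between confidence and accuracy (-|error|).

    Positive values mean more-confident members are also more accurate,
    the condition under which discussion tended to help in the study.
    """
    accuracies = [-abs(e - truth) for e in estimates]
    mc, ma = mean(confidences), mean(accuracies)
    cov = sum((c - mc) * (a - ma) for c, a in zip(confidences, accuracies))
    var_c = sum((c - mc) ** 2 for c in confidences)
    var_a = sum((a - ma) ** 2 for a in accuracies)
    return cov / (var_c * var_a) ** 0.5

# A well-calibrated group: the most confident member is closest to truth.
estimates = [100, 140, 210]   # independent guesses at a true value of 150
confidences = [2, 5, 1]       # self-rated confidence, 1-5
print(crowd_estimate(estimates))                                # 150
print(collective_calibration(estimates, confidences, 150) > 0)  # True
```

On this toy group, the independent average already lands on the truth, and the positive calibration score marks it as a group whose discussion, per the paper's finding, would be likelier to help than hurt.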

Thursday, June 17, 2021

Biased Benevolence: The Perceived Morality of Effective Altruism Across Social Distance

Law, K. F., Campbell, D., & Gaesser, B. 
(2019, July 11). 
https://doi.org/10.31234/osf.io/qzx67

Abstract

Is altruism always morally good, or is the morality of altruism fundamentally shaped by the social opportunity costs that often accompany helping decisions? Across five studies, we reveal that, although helping both socially closer and socially distant others is generally perceived favorably (Study 1), in cases of realistic tradeoffs in social distance for gains in welfare where helping socially distant others necessitates not helping socially closer others with the same resources, helping is deemed less morally acceptable (Studies 2-5). Making helping decisions at a cost to socially closer others also negatively affects judgments of relationship quality (Study 3) and in turn, decreases cooperative behavior with the helper (Study 4). Ruling out an alternative explanation of physical distance accounting for the effects in Studies 1-4, social distance continued to impact moral acceptability when physical distance across social targets was matched (Study 5). These findings reveal that attempts to decrease biases in helping may have previously unconsidered consequences for moral judgments, relationships, and cooperation.

General Discussion

When judging the morality of altruistic tradeoffs in social distance for gains in welfare advocated by the philosophy and social movement of effective altruism, we find that the perceived morality of altruism is graded by social distance. People consistently view socially distant altruism as less morally acceptable as the person not receiving help becomes socially closer to the agent helping. This suggests that whereas altruism is generally evaluated as morally praiseworthy, the moral calculus of altruism flexibly shifts according to the social distance between the person offering aid and the people in need. Such findings highlight the empirical value and theoretical importance of investigating moral judgments situated in real-world social contexts.

Wednesday, June 16, 2021

Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning Than by Motivated Reasoning

Pennycook, G. & Rand, D. G.
Cognition, Volume 188, July 2019, Pages 39-50

Abstract

Why do people believe blatantly inaccurate news headlines (“fake news”)? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3,446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news – even for headlines that align with individuals’ political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant’s ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one’s political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se – a finding that opens potential avenues for fighting fake news.

Highlights

• Participants rated perceived accuracy of fake and real news headlines.

• Analytic thinking was associated with ability to discern between fake and real.

• We found no evidence that analytic thinking exacerbates motivated reasoning.

• Falling for fake news is more a result of a lack of thinking than partisanship.
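The "discernment" outcome the abstract correlates with CRT performance can be sketched in a few lines. The data below are hypothetical, not the authors'; discernment is scored here, as is common in this literature, as the mean accuracy rating given to real headlines minus the mean rating given to fake ones.

```python
# Illustrative sketch (hypothetical ratings, not the study's data):
# "discernment" = mean perceived accuracy of real headlines minus
# mean perceived accuracy of fake headlines.
def discernment(real_ratings, fake_ratings):
    """Higher values = better at telling real news from fake."""
    return (sum(real_ratings) / len(real_ratings)
            - sum(fake_ratings) / len(fake_ratings))

# One participant rates headlines from 1 (not accurate) to 4 (accurate).
good_discerner = discernment(real_ratings=[4, 3, 4], fake_ratings=[1, 2, 1])
poor_discerner = discernment(real_ratings=[3, 3, 2], fake_ratings=[3, 2, 3])
print(round(good_discerner, 2))          # 2.33
print(good_discerner > poor_discerner)   # True
```

The paper's claim is that scores like `good_discerner` rise with CRT performance, for both ideologically aligned and misaligned headlines, which is what points to lazy thinking rather than motivated reasoning.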

Tuesday, June 15, 2021

Diagnostic Mistakes a Big Contributor to Malpractice Suits, Study Finds

Joyce Friedan
MedPageToday.com
Originally posted 26 May 21

Here are two excerpts:

One problem is that "healthcare is inherently risky," she continued. For example, "there's ever-changing industry knowledge, growing bodies of clinical options, new diseases, and new technology. There are variable work demands -- boy, didn't we experience that this past year! -- and production pressure has long been a struggle and a challenge for our providers and their teams." Not to mention variable individual competency, an aging population, complex health issues, and evolving workforces.

(cut)

Cognitive biases can also trigger diagnostic errors, Siegal said. "Anchor bias" occurs when "a provider anchors on a diagnosis, early on, and then through the course of the journey looks for things to confirm that diagnosis. Once they've confirmed it enough that 'search satisfaction' is met, that leads to premature closure" of the patient's case. But that causes a problem because "it means that there's a failure to continue exploring other options. What else could it be? It's a failure to establish, perhaps, every differential diagnosis."

To avoid this problem, providers "always want to think about, 'Am I anchoring too soon? Am I looking to confirm, rather than challenge, my diagnosis?'" she said. According to the study, 25% of cases didn't have evidence of a differential diagnosis, and 36% fell into the category of "confirmation bias" -- "I was looking for things to confirm what I knew, but there were relevant signs and symptoms or positive tests that were still present that didn't quite fit the picture, but it was close. So they were somehow discounted, and the premature closure took over and a diagnosis was made," she said.

She suggested that clinicians take a "diagnostic timeout" -- similar to a surgical timeout -- when they're arriving at a diagnosis. "What else could this be? Have I truly explored all the other possibilities that seem relevant in this scenario and, more importantly, what doesn't fit? Be sure to dis-confirm as well."

Monday, June 14, 2021

Bias Is a Big Problem. But So Is ‘Noise.’

Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein
The New York Times
Originally posted 15 May 21

Here is an excerpt:

There is much evidence that irrelevant circumstances can affect judgments. In the case of criminal sentencing, for instance, a judge’s mood, fatigue and even the weather can all have modest but detectable effects on judicial decisions.

Another source of noise is that people can have different general tendencies. Judges often vary in the severity of the sentences they mete out: There are “hanging” judges and lenient ones.

A third source of noise is less intuitive, although it is usually the largest: People can have not only different general tendencies (say, whether they are harsh or lenient) but also different patterns of assessment (say, which types of cases they believe merit being harsh or lenient about). 

Underwriters differ in their views of what is risky, and doctors in their views of which ailments require treatment. 

We celebrate the uniqueness of individuals, but we tend to forget that, when we expect consistency, uniqueness becomes a liability.

Once you become aware of noise, you can look for ways to reduce it. For instance, independent judgments from a number of people can be averaged (a frequent practice in forecasting). 

Guidelines, such as those often used in medicine, can help professionals reach better and more uniform decisions. 

As studies of hiring practices have consistently shown, imposing structure and discipline in interviews and other forms of assessment tends to improve judgments of job candidates.

No noise-reduction techniques will be deployed, however, if we do not first recognize the existence of noise. 

Noise is too often neglected. But it is a serious issue that results in frequent error and rampant injustice. 

Organizations and institutions, public and private, will make better decisions if they take noise seriously.
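The first noise-reduction technique the excerpt mentions, averaging independent judgments, can be shown with a minimal simulation. The numbers below are illustrative assumptions, not drawn from the article: each "judge" scatters randomly around a true value, and a panel's average cancels much of that scatter.

```python
# Minimal simulation of the noise-reduction idea described above:
# averaging independent judgments cancels out much of the random
# scatter ("noise") in individual decisions.
import random

random.seed(0)

TRUE_VALUE = 100.0   # the judgment a noiseless judge would make
NOISE_SD = 20.0      # spread of individual judges around it

def one_judgment():
    """A single judge: unbiased but noisy."""
    return random.gauss(TRUE_VALUE, NOISE_SD)

def averaged_judgment(n_judges):
    """A panel: the mean of n independent judgments."""
    return sum(one_judgment() for _ in range(n_judges)) / n_judges

def mean_abs_error(judge, trials=2000):
    """Average distance from the true value over many cases."""
    return sum(abs(judge() - TRUE_VALUE) for _ in range(trials)) / trials

solo_error = mean_abs_error(one_judgment)
panel_error = mean_abs_error(lambda: averaged_judgment(9))
# With 9 independent judges, error shrinks by roughly sqrt(9) = 3x.
print(solo_error > 2 * panel_error)  # True
```

Note the square-root cost of this remedy: cutting noise in half takes four times as many independent judges, which is why the article's other suggestions, guidelines and structured assessment, matter as well.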

Sunday, June 13, 2021

Philosophy in Science: Can philosophers of science permeate through science and produce scientific knowledge?

Pradeu, T., et al. (2021)
Preprint
The British Journal for the Philosophy of Science

Abstract

Most philosophers of science do philosophy ‘on’ science. By contrast, others do philosophy ‘in’ science (‘PinS’), i.e., they use philosophical tools to address scientific problems and to provide scientifically useful proposals. Here, we consider the evidence in favour of a trend of this nature. We proceed in two stages. First, we identify relevant authors and articles empirically with bibliometric tools, given that PinS would be likely to infiltrate science and thus to be published in scientific journals (‘intervention’), cited in scientific journals (‘visibility’) and sometimes recognized as a scientific result by scientists (‘contribution’). We show that many central figures in philosophy of science have been involved in PinS, and that some philosophers have even ‘specialized’ in this practice. Second, we propose a conceptual definition of PinS as a process involving three conditions (raising a scientific problem, using philosophical tools to address it, and making a scientific proposal), and we ask whether the articles identified at the first stage fulfil all these conditions. We show that PinS is a distinctive, quantitatively substantial trend within philosophy of science, demonstrating the existence of a methodological continuity from science to philosophy of science.

From the Conclusion

A crucial and long-standing question for philosophers of science is how philosophy of science relates to science, including, in particular, its possible impact on science. Various important ways in which philosophy of science can have an impact on science have been documented in the past, from the influence of Mach, Poincaré and Schopenhauer on the development of the theory of relativity (Rovelli [2018]) to Popper's long-recognized influence on scientists, such as Eccles and Medawar, and some recent reflections on how best to organize science institutionally (e.g. Leonelli [2017]). Here, we identify and describe an approach that we propose to call 'PinS', which adds another, in our view essential, layer to this picture.

By combining quantitative and qualitative tools, we demonstrate the existence of a corpus of articles by philosophers of science, either published in philosophy of science journals or in scientific journals, raising scientific problems and aiming to contribute to their resolution via the use of philosophical tools. PinS constitutes a subdomain of philosophy of science, which has a long history, with canonical texts and authors, but, to our knowledge, this is the first time this domain is delineated and analysed.

Saturday, June 12, 2021

Science Doesn't Work That Way

Gregory E. Kaebnick
Boston Review
Originally published 30 April 21

Here is an excerpt:

The way to square this circle is to acknowledge that what objectivity science is able to deliver derives not from individual scientists but from the social institutions and practices that structure their work. The philosopher of science Karl Popper expressed this idea clearly in his 1945 book The Open Society and Its Enemies. “There is no doubt that we are all suffering under our own system of prejudices,” he acknowledged—“and scientists are no exception to this rule.” But this is no threat to objectivity, he argued—not because scientists manage to liberate themselves from their prejudices, but rather because objectivity is “closely bound up with the social aspect of scientific method.” In particular, “science and scientific objectivity do not (and cannot) result from the attempts of an individual scientist to be ‘objective,’ but from the friendly-hostile co-operation of many scientists.” Thus Robinson Crusoe cannot be a scientist, “For there is nobody but himself to check his results.”

More recently, philosophers and historians of science such as Helen Longino, Miriam Solomon, and Naomi Oreskes have developed detailed arguments along similar lines, showing how the integrity and objectivity of scientific knowledge depend crucially on social practices. Science even sometimes advances not in spite but because of scientists’ filters and biases—whether a tendency to focus single-mindedly on a particular set of data, a desire to beat somebody else to an announcement, a contrarian streak, or an overweening self-confidence. Any vision of science that makes it depend on complete disinterestedness is doomed to make science impossible. Instead, we must develop a more widespread appreciation of the way science depends on protocols and norms that scientists have collectively developed for testing, refining, and disseminating scientific knowledge. A scientist must be able to show that research has met investigative standards, that it has been exposed to criticism, and that criticisms can be met with arguments.

The implication is that science works not so much because scientists have a special ability to filter out their biases or to access the world as it really is, but instead because they are adhering to a social process that structures their work—constraining and channeling their predispositions and predilections, their moments of eureka, their large yet inevitably limited understanding, their egos and their jealousies. These practices and protocols, these norms and standards, do not guarantee mistakes are never made. But nothing can make that guarantee. The rules of the game are themselves open to scrutiny and revision in light of argument, and that is the best we can hope for.

This way of understanding science fares better than the exalted view, which makes scientific knowledge impossible. Like all human endeavors, science is fallible, but still it warrants belief—according to how well it adheres to rules we have developed for it. What makes for objectivity and expertise is not, or not merely, the simple alignment between what one claims and how the world is, but a commitment to a process that is accepted as producing empirical adequacy.