Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, October 19, 2022

Technology and moral change: the transformation of truth and trust

Danaher, J., Sætra, H.S. 
Ethics Inf Technol 24, 35 (2022).
https://doi.org/10.1007/s10676-022-09661-y

Abstract

Technologies can have profound effects on social moral systems. Is there any way to systematically investigate and anticipate these potential effects? This paper aims to contribute to this emerging field of inquiry through a case study method. It focuses on two core human values—truth and trust—describes their structural properties and conceptualisations, and then considers various mechanisms through which technology is changing and can change our perspective on those values. In brief, the paper argues that technology is transforming these values by changing the costs/benefits of accessing them; allowing us to substitute those values for other, closely-related ones; increasing their perceived scarcity/abundance; and disrupting traditional value-gatekeepers. This has implications for how we study other, technologically-mediated, value changes.

(cut)

Conclusion: lessons learned

Having examined our two case studies, it remains to consider whether or not there are similarities in how technology affects trust and truth, and if there are general lessons to be learned here about how technology may impact values in the future.

The two values we have considered are structurally similar and interrelated. They are both intrinsically and instrumentally valuable. They are both epistemic and practical in nature: we value truth and trust (at least in part) because they give us access to knowledge and help us to resolve the decision problems we face on a daily basis. We also see, in both case studies, similar mechanisms of value change at work. The most interesting, to our minds, are the following:
  • Technology changes the costs associated with accessing certain values, making them less or more important as a result. Digital disinformation technology increases the cost of finding out the truth, but reduces the cost of finding and reinforcing a shared identity community; reliable AI and robotics gives us an (often cheaper and more efficient) substitute for trust in humans, while still giving us access to useful cognitive, emotional and physical assistance.
  • Technology makes it easier, or more attractive, to trade off or substitute some values against others. Digital disinformation technology allows us to obviate the need for finding out the truth and focus on other values instead; reliable machines allow us to substitute the value of reliability for the value of trust. This is a function of the plural nature of values, their scarcity, and the changing cost structure of values caused by technology.
  • Technology can make some values seem more scarce (rare, difficult to obtain), thereby increasing their perceived intrinsic value. Digital disinformation makes truth more elusive, thereby increasing its perceived value which, in turn, encourages some moral communities to increase their fixation on it; robots and AI make trust in humans less instrumentally necessary, thereby increasing the expressive value of trust in others.
  • Technology can disrupt power networks, thereby altering the social gatekeepers to value. To the extent that we still care about truth, digital disinformation increases the power of the epistemic elites that can help us to access the truth; trust-free or trust-alternative technologies can disrupt the power of traditional trusted third parties (professionals, experts etc.) and redistribute power onto technology or a technological elite.

Wednesday, June 22, 2016

Moral intuitions: Are philosophers experts?

Kevin Tobia, Wesley Buckwalter, and Stephen Stich
Philosophical Psychology, 26(5): 629-638.

Abstract

Recently psychologists and experimental philosophers have reported findings showing that in some cases ordinary people’s moral intuitions are affected by factors of dubious relevance to the truth of the content of the intuition. Some defend the use of intuition as evidence in ethics by arguing that philosophers are the experts in this area, and philosophers’ moral intuitions are both different from those of ordinary people and more reliable. We conducted two experiments indicating that philosophers and non-philosophers do indeed sometimes have different moral intuitions, but challenging the notion that philosophers have better or more reliable intuitions.

The article is here.

Wednesday, March 16, 2016

The Brain Gets Its Day in Court

By Greg Miller
The Atlantic
Originally published February 29, 2016

Here is an excerpt:

A handful of cases have made headlines in recent years, as lawyers representing convicted murderers have introduced brain scans and other tests of brain function to try to spare their clients the death penalty. It didn’t always work, but Farahany’s analysis suggests that neuroscientific evidence—which she broadly defines as anything from brain scans to neuropsychological exams to bald assertions about the condition of a person’s brain—is being used in a wider variety of cases, and in the service of more diverse legal strategies, than the headlines would suggest. In fact, 60 percent of the cases in her sample involved non-capital offenses, including robbery, fraud, and drug trafficking.

Cases like Detrich’s are one example. Arguing for ineffective assistance of counsel is pretty much a legal Hail Mary. It requires proving two things: that the defense counsel failed to do their job adequately, and (raising the bar even higher) that this failure caused the trial to be unfairly skewed against the defendant. Courts have ruled previously that a defense attorney who slept through substantial parts of a trial still provided effective counsel. Not so, at least in some cases, for attorneys who failed to introduce neuroscience evidence in their client’s defense.

The article is here.

Friday, February 12, 2016

Growing use of neurobiological evidence in criminal trials, new study finds

By Emily Underwood
Science
Originally posted January 21, 2016

Here is an excerpt:

Overall, the new study suggests that neurobiological evidence has improved the U.S. criminal justice system “through better determinations of competence and considerations about the role of punishment,” says Judy Illes, a neuroscientist at the University of British Columbia, Vancouver, in Canada. That is not Farahany’s interpretation, however. With a few notable exceptions, use of neurobiological evidence in courtrooms “continues to be haphazard, ad hoc, and often ill conceived,” she and her colleagues write. Lawyers rarely heed scientists’ cautions “that the neurobiological evidence at issue is weak, particularly for making claims about individuals rather than studying between-group differences,” they add.

The article is here.

Sunday, November 8, 2015

Deconstructing the seductive allure of neuroscience explanations

Weisberg DS, Keil FC, Goodstein J, Rawson E, Gray JR.
Judgment and Decision Making, Vol. 10, No. 5, 
September 2015, pp. 429–441

Abstract

Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people's abilities to critically consider the underlying logic of this explanation. We tested this hypothesis by giving naïve adults, students in a neuroscience course, and neuroscience experts brief descriptions of psychological phenomena followed by one of four types of explanation, according to a 2 (good explanation vs. bad explanation) x 2 (without neuroscience vs. with neuroscience) design. Crucially, the neuroscience information was irrelevant to the logic of the explanation, as confirmed by the expert subjects. Subjects in all three groups judged good explanations as more satisfying than bad ones. But subjects in the two nonexpert groups additionally judged that explanations with logically irrelevant neuroscience information were more satisfying than explanations without. The neuroscience information had a particularly striking effect on nonexperts' judgments of bad explanations, masking otherwise salient problems in these explanations.

The entire article is here.