Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, January 31, 2021

Free Will & The Brain

Kevin Loughran
Philosophy Now (2020)

The idea of free will touches on human decision-making and action, and therefore on the workings of the brain, so the science of the brain can inform the argument about free will. Technology, especially in the form of brain scanning, has provided new insights into what is happening in our brains before we take action. And some brain studies – especially those led by Benjamin Libet at the University of California, San Francisco in the 1980s – have indicated the possibility of unconscious brain activity setting up our bodies to act on our decisions before we are conscious of having decided to act. For some people, such studies have confirmed the judgement that we lack free will. But do these studies provide sufficient data to justify such a generalisation about free will?

First, these studies do touch on the issue of how we make choices and reach decisions; but they do so in respect of some simple, and directed, tasks. For example, in one of Libet’s studies, he asked volunteers to move a hand in one direction or another and to note the time when they consciously decided to do so (50 Ideas You Really Need to Know about the Human Brain, Moheb Costandi, p.60, 2013). The data these and similar brain studies provide might justly be taken to prove that when research volunteers are asked by a researcher to do one simple thing or another, and they do it, then unconscious brain processes may have moved them towards a choice a fraction of a second before they were conscious of making that choice. The question is: can they be taken to prove more than that?

To explore this question let’s first look at some of the range of choices we make in our lives day by day and week by week, then ask what they might tell us about how we come to make decisions and how this might relate to experimental results such as Libet’s. At the very least, examining the range of our choices might provide a better, wider range of research projects in the future.

Saturday, January 30, 2021

Checked by reality, some QAnon supporters seek a way out

David Klepper
Associated Press
Originally posted 28 January 21

Here are two excerpts:

It's not clear exactly how many people believe some or all of the narrative, but backers of the movement were vocal in their support for Trump and helped fuel the insurrectionists who overran the U.S. Capitol this month. QAnon is also growing in popularity overseas.

Former believers interviewed by The Associated Press liken the process of leaving QAnon to kicking a drug addiction. QAnon, they say, offers simple explanations for a complicated world and creates an online community that provides escape and even friendship.

Smith's then-boyfriend introduced her to QAnon. It was all he could talk about, she said. At first she was skeptical, but she became convinced after the death of financier Jeffrey Epstein while in federal custody facing pedophilia charges. Officials debunked theories that he was murdered, but to Smith and other QAnon supporters, his suicide while facing child sex charges was too much to accept.

Soon, Smith was spending more time on fringe websites and on social media, reading and posting about the conspiracy theory. She said she fell for QAnon content that presented no evidence and no counterarguments, yet was all too convincing.

(cut)

“This isn't about critical thinking, of having a hypothesis and using facts to support it," Cohen said of QAnon believers. “They have a need for these beliefs, and if you take that away, because the storm did not happen, they could just move the goal posts.”

Some now say Trump's loss was always part of the plan, or that he secretly remains president, or even that Joe Biden's inauguration was created using special effects or body doubles. They insist that Trump will prevail, and powerful figures in politics, business and the media will be tried and possibly executed on live television, according to recent social media posts.

“Everyone will be arrested soon. Confirmed information,” read a post viewed 130,000 times this week on Great Awakening, a popular QAnon channel on Telegram. “From the very beginning I said it would happen.”

But a different tone is emerging in the spaces created for those who have heard enough.

“Hi my name is Joe,” one man wrote on a Q recovery channel in Telegram. “And I’m a recovering QAnoner.”

Scientific communication in a post-truth society

S. Iyengar & D. S. Massey
PNAS Apr 2019, 116 (16) 7656-7661

Abstract

Within the scientific community, much attention has focused on improving communications between scientists, policy makers, and the public. To date, efforts have centered on improving the content, accessibility, and delivery of scientific communications. Here we argue that in the current political and media environment faulty communication is no longer the core of the problem. Distrust in the scientific enterprise and misperceptions of scientific knowledge increasingly stem less from problems of communication and more from the widespread dissemination of misleading and biased information. We describe the profound structural shifts in the media environment that have occurred in recent decades and their connection to public policy decisions and technological changes. We explain how these shifts have enabled unscrupulous actors with ulterior motives increasingly to circulate fake news, misinformation, and disinformation with the help of trolls, bots, and respondent-driven algorithms. We document the high degree of partisan animosity, implicit ideological bias, political polarization, and politically motivated reasoning that now prevail in the public sphere and offer an actual example of how clearly stated scientific conclusions can be systematically perverted in the media through an internet-based campaign of disinformation and misinformation. We suggest that, in addition to attending to the clarity of their communications, scientists must also develop online strategies to counteract campaigns of misinformation and disinformation that will inevitably follow the release of findings threatening to partisans on either end of the political spectrum.

(cut)

At this point, probably the best that can be done is for scientists and their scientific associations to anticipate campaigns of misinformation and disinformation and to proactively develop online strategies and internet platforms to counteract them when they occur. For example, the National Academies of Sciences, Engineering, and Medicine could form a consortium of professional scientific organizations to fund a media and internet operation that monitors networks, channels, and web platforms known to spread false and misleading scientific information, enabling a quick countervailing campaign of rebuttal based on accurate information through Facebook, Twitter, and other forms of social media.

Friday, January 29, 2021

Moral psychology of sex robots: An experimental study

M. Koverola, et al.
Paladyn, Journal of Behavioral Robotics
Volume 11: Issue 1

Abstract

The idea of sex with robots seems to fascinate the general public, raising both enthusiasm and revulsion. We ran two experimental studies (Ns = 172 and 260) where we compared people’s reactions to variants of stories about a person visiting a bordello. Our results show that paying for the services of a sex robot is condemned less harshly than paying for the services of a human sex worker, especially if the payer is married. We have for the first time experimentally confirmed that people are somewhat unsure about whether using a sex robot while in a committed monogamous relationship should be considered as infidelity. We also shed light on the psychological factors influencing attitudes toward sex robots, including disgust sensitivity and interest in science fiction. Our results indicate that sex with a robot is indeed genuinely considered as sex, and a sex robot is genuinely seen as a robot; thus, we show that standard research methods on sexuality and robotics are also applicable in research on sex robotics.

(cut)

Conclusion

Our results show that people condemn a married person less harshly if they pay for a robot sex worker than for a human sex worker. This likely reflects the fact that many people do not consider sex with a robot to be infidelity, or consider it “cheating, but less so than with a human person”. These results therefore function as a stepping-stone into new avenues of research that might appeal to evolutionary and moral psychologists alike. Most likely, sociologists and market researchers will also be interested in deepening our understanding of the complex relations between humans and members of new ontological categories (robots, artificial intelligences (AIs), etc.). Future research will offer new possibilities to understand both human sexual and moral cognition by focusing on how humans relate to sexual relationships with androids, beyond the fantasies produced by science fiction like Westworld or Blade Runner. As sex robots enter mass production in the near future, public opinion about the morality of sex with robots will presumably stabilize.


Thursday, January 28, 2021

Automation, work and the achievement gap

Danaher, J., Nyholm, S. 
AI Ethics (2020). 

Abstract

Rapid advances in AI-based automation have led to a number of existential and economic concerns. In particular, as automating technologies develop enhanced competency, they seem to threaten the values associated with meaningful work. In this article, we focus on one such value: the value of achievement. We argue that achievement is a key part of what makes work meaningful and that advances in AI and automation give rise to a number of achievement gaps in the workplace. This could limit people’s ability to participate in meaningful forms of work. Achievement gaps are interesting, in part, because they are the inverse of the (negative) responsibility gaps already widely discussed in the literature on AI ethics. Having described and explained the problem of achievement gaps, the article concludes by identifying four possible policy responses to the problem.

(cut)

Conclusion

Achievement is an important part of the well-lived life. It is the positive side of responsibility. Where we blame ourselves and others for doing bad things, we also praise ourselves for achieving positive (or value neutral) things. Achievement is particularly important when it comes to meaningful work. One of the problems with widespread automation is that it threatens to undermine at least three of the four main conditions for achievement in the workplace: it can reduce the value of work tasks; reduce the cost of committing to those work tasks; and sever the causal connection between human effort and workplace outcome. This opens up ‘achievement gaps’ in the workplace. There are, however, some potential ways to manage the threat of achievement gaps: we can focus on other aspects of meaningful work; we can find some ways to retain the human touch in the production of workplace outputs; we can emphasise the importance of teamwork in producing valuable outputs; and we can find outlets for achievement outside of the workplace.

Wednesday, January 27, 2021

What One Health System Learned About Providing Digital Services in the Pandemic

Marc Harrison
Harvard Business Review
Originally posted 11 Dec 20

Here are two excerpts:

Lesson 2: Digital care is safer during the pandemic.

A patient who’s tested positive for Covid doesn’t have to go see her doctor or go into an urgent care clinic to discuss her symptoms. Doctors and other caregivers who are providing virtual care for hospitalized Covid patients don’t face increased risk of exposure. They also don’t have to put on personal protective equipment, step into the patient’s room, then step outside and take off their PPE. We need those supplies, and telehealth helps us preserve them.

Intermountain Healthcare’s virtual hospital is especially well-suited for Covid patients. It works like this: In a regular hospital, you come into the ER, and we check you out and think you’re probably going to be okay, but you’re sick enough that we want to monitor you. So, we admit you.

With our virtual hospital — which uses a combination of telemedicine, home health, and remote patient monitoring — we send you home with a technology kit that allows us to check how you’re doing. You’ll be cared for by a virtual team, including a hospitalist who monitors your vital signs around the clock and home health nurses who do routine rounding. That’s working really well: Our clinical outcomes are excellent, our satisfaction scores are through the roof, and it’s less expensive. Plus, it frees up the hospital beds and staff we need to treat our sickest Covid patients.
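To make the remote-monitoring piece of this workflow concrete, here is a minimal sketch in Python, purely hypothetical and not Intermountain's actual system, of how readings from a home technology kit might be triaged so the virtual team is paged only when vitals drift out of range. The thresholds and field names are illustrative assumptions, not clinical guidance.

```python
# Hypothetical sketch of a remote-monitoring triage loop; NOT
# Intermountain's system. Thresholds are illustrative, not clinical.
from dataclasses import dataclass

@dataclass
class VitalReading:
    patient_id: str
    spo2: float      # blood oxygen saturation, percent
    heart_rate: int  # beats per minute

# Assumed alert thresholds, for illustration only.
SPO2_MIN = 92.0
HR_MAX = 120

def triage(reading: VitalReading) -> str:
    """Flag readings that should page the on-call virtual hospitalist."""
    if reading.spo2 < SPO2_MIN or reading.heart_rate > HR_MAX:
        return f"ALERT: {reading.patient_id} needs clinician review"
    return f"OK: {reading.patient_id} within expected range"

print(triage(VitalReading("patient-001", spo2=90.5, heart_rate=88)))
print(triage(VitalReading("patient-002", spo2=96.0, heart_rate=72)))
```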

(cut)

Lesson 4: Digital tools support the direction health care is headed.

Telehealth supports value-based care, in which hospitals and other care providers are paid based on the health outcomes of their patients, not on the amount of care they provide. The result is a greater emphasis on preventive care — which reduces unsustainable health care costs.

Intermountain serves a large population of at-risk, pre-paid consumers, and the more they use telehealth, the easier it is for them to stay healthy — which reduces costs for them and for us. The pandemic has forced payment systems, including the government’s, to keep up by expanding reimbursements for telehealth services.

This is worth emphasizing: If we can deliver care in lower-cost settings, we can reduce the cost of care. Some examples:
  • The average cost of a virtual encounter at Intermountain is $367 less than the cost of a visit to an urgent care clinic, physician’s office, or emergency department (ED).
  • Our virtual newborn ICU has helped us reduce the number of transports to our large hospitals by 65 a year since 2015. Not counting the clinical and personal benefits, that’s saved $350,000 per year in transportation costs.
  • Our internal study of 150 patients in one rural Utah town showed each patient saved an average of $2,000 in driving expenses and lost wages over a year’s time because he or she was able to receive telehealth care close to home. We also avoided pumping 106,460 kilograms of CO2 into the environment — and (per the following point) the town’s 24-bed hospital earned $1.6 million that otherwise would have shifted to a larger hospital in a bigger town.

Tuesday, January 26, 2021

Publish or Be Ethical? 2 Studies of Publishing Pressure & Scientific Misconduct in Research

Paruzel-Czachura M, Baran L, & Spendel Z. 
Research Ethics. December 2020. 

Abstract

The paper reports two studies exploring the relationship between scholars’ self-reported publication pressure and their self-reported scientific misconduct in research. In Study 1 the participants (N = 423) were scholars representing various disciplines from one large university in Poland. In Study 2 the participants (N = 31) were exclusively members of the management, such as deans and directors, from the same university. In Study 1 the most commonly reported form of scientific misconduct was honorary authorship. The majority of researchers (71%) reported that they had not violated ethical standards in the past; 3% admitted to scientific misconduct; 51% reported being aware of colleagues’ scientific misconduct. A small positive correlation between perceived publication pressure and intention to engage in scientific misconduct in the future was found. In Study 2 more than half of the management (52%) reported being aware of researchers’ dishonest practices, the most frequent of these being honorary authorship. As many as 71% of the participants reported observing publication pressure in their subordinates. The primary conclusions are: (1) most scholars are convinced of their morality and predict that they will behave morally in the future; (2) scientific misconduct, particularly minor offenses such as honorary authorship, is frequently observed both by researchers (particularly in their colleagues) and by their managers; (3) researchers experiencing publication pressure report a willingness to engage in scientific misconduct in the future.

Conclusion

Our findings suggest that the question “publish or be ethical?” may constitute a real dilemma for researchers. Although only 3% of our sample admitted to having engaged in scientific misconduct, only 71% reported that they definitely had not violated ethical standards in the past. Furthermore, more than half (51%) reported seeing scientific misconduct among their colleagues. We did not find a correlation between unsatisfactory work conditions and scientific misconduct, but we did find evidence to support the theory that perceived pressure to collect publication points is correlated with willingness to exceed ethical standards in the future.

Monday, January 25, 2021

Late Payments, Credit Scores May Predict Dementia

Judy George
MedPage Today
Originally posted 30 Nov 20

Problems paying bills and managing personal finances were evident years before a dementia diagnosis, retrospective data showed.

As early as 6 years before they were diagnosed with dementia, people with Alzheimer's disease and related dementias were more likely to miss credit account payments than their peers without dementia (7.7% vs 7.3%; absolute difference 0.4 percentage points, 95% CI 0.07-0.70), reported Lauren Hersch Nicholas, PhD, MPP, of Johns Hopkins University in Baltimore, and co-authors.

They also were more likely to develop subprime credit scores 2.5 years before their dementia diagnosis (8.5% vs 8.1%; absolute difference 0.38 percentage points, 95% CI 0.04-0.72), the researchers wrote in JAMA Internal Medicine.

Higher payment delinquency and subprime credit rates persisted for at least 3.5 years after a dementia diagnosis.
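For readers unfamiliar with how such results are reported, the sketch below shows the arithmetic behind an "absolute difference in percentage points" with a normal-approximation 95% CI. The counts are hypothetical stand-ins chosen only to mirror the 7.7% vs 7.3% rates above; the study's actual intervals come from much larger samples and adjusted models.

```python
# Illustrative arithmetic only; the counts below are hypothetical,
# NOT data from the JAMA Internal Medicine study.
import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference of two proportions with a normal-approximation 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical: 77 of 1,000 dementia cases vs 73 of 1,000 controls
# missed a payment (mirroring the 7.7% vs 7.3% figures).
diff, (lo, hi) = prop_diff_ci(77, 1000, 73, 1000)
print(f"absolute difference: {diff * 100:.2f} percentage points "
      f"(95% CI {lo * 100:.2f} to {hi * 100:.2f})")
```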

"Our study provides the first large-scale evidence of the financial symptoms of Alzheimer's disease and related dementias using administrative financial records," Nicholas said.

"These results are important because they highlight a new source of data -- consumer credit reports -- that can help detect early signs of Alzheimer's disease," she told MedPage Today. "While doctors have long believed that dementia presents in the checkbook, our study helps show that these financial symptoms are common and span years before and after diagnosis, suggesting unmet need for assistance managing money."

Sunday, January 24, 2021

Trust does not need to be human: it is possible to trust medical AI

Ferrario A, Loi M, Viganò E.
Journal of Medical Ethics 
Published Online First: 25 November 2020. 
doi: 10.1136/medethics-2020-106922

Abstract

In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI), if one refrains from simply assuming that trust describes human–human interactions. To do so, we consider an account of trust that distinguishes trust from reliance in a way that is compatible with trusting non-human agents. In this account, to trust a medical AI is to rely on it with little monitoring and control of the elements that make it trustworthy. This attitude does not imply specific properties in the AI system that in fact only humans can have. This account of trust is applicable, in particular, to all cases where a physician relies on the medical AI predictions to support his or her decision making.

Here is an excerpt:

Let us clarify our position with an example. Medical AIs support decision making by providing predictions, often in the form of machine learning model outcomes, to identify and plan better prognoses, diagnoses and treatments. These outcomes are the result of complex computational processes on high-dimensional data that are difficult for physicians to understand. Therefore, it may be convenient to look at the medical AI as a ‘black box’, or an input–output system whose internal mechanisms are not directly accessible or understandable. Through a sufficient number of interactions with the medical AI, its developers, and AI-savvy colleagues, and by analysing different types of outputs (eg, those of young patients or multimorbid ones), the physician may develop a mental model, that is, a set of beliefs about the performance and error patterns of the AI. We describe this phase in the relation between the physician and the AI as the ‘mere reliance’ phase, which does not need to involve trust (or at best involves very little trust).
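As a gloss on the 'mental model' the authors describe, one can picture the physician's accumulated experience as a running tally of the black box's hit rate per patient subgroup. The sketch below is our illustration, not anything from the paper; the subgroup labels and the small API are invented for the example.

```python
# Minimal sketch of forming beliefs about an opaque predictor by
# tracking its observed error patterns per subgroup. Hypothetical.
from collections import defaultdict

class BlackBoxReliability:
    """Tracks empirical accuracy of a black-box AI by patient subgroup."""
    def __init__(self):
        self.stats = defaultdict(lambda: {"n": 0, "correct": 0})

    def observe(self, subgroup, prediction, outcome):
        # The AI's internals are inaccessible; we only log how its
        # outputs compare with eventual clinical outcomes.
        s = self.stats[subgroup]
        s["n"] += 1
        s["correct"] += int(prediction == outcome)

    def accuracy(self, subgroup):
        s = self.stats[subgroup]
        return s["correct"] / s["n"] if s["n"] else None

# Hypothetical usage: experience accumulated across cases.
beliefs = BlackBoxReliability()
beliefs.observe("young", prediction="benign", outcome="benign")
beliefs.observe("multimorbid", prediction="benign", outcome="malignant")
print(beliefs.accuracy("young"), beliefs.accuracy("multimorbid"))
```

On this picture, 'mere reliance' corresponds to heavy monitoring of such tallies; trust, in the authors' sense, is relying on the AI while checking them less and less.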