Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, July 31, 2018

Fostering Discussion When Teaching Abortion and Other Morally and Spiritually Charged Topics

Louise P. King and Alan Penzias
AMA Journal of Ethics. July 2018, Volume 20, Number 7: 637-642.

Abstract

Best practices for teaching morally and spiritually charged topics, such as abortion, to those early in their medical training are elusive at best, especially in our current political climate. Here we advocate that our duty as educators requires that we explore these topics in a supportive environment. In particular, we must model respectful discourse for our learners in these difficult areas.

How to Approach Difficult Conversations

When working with learners early in their medical training, educators can find that best practices for discussion of morally and spiritually charged topics are elusive. In this article, we address how to meaningfully discuss and explore students’ conscientious objection to participation in a particular procedure. In particular, we consider the following questions: When, if ever, is it justifiable to define a good outcome of such teaching as changing students’ minds about their health practice beliefs, and when, if ever, is it appropriate to illuminate the negative impacts their health practice beliefs can have on patients?

The information is here.

Toward an Ethics of AI Assistants: an Initial Framework

John Danaher
Philos. Technol.
Accepted May 22, 2018

Abstract

Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.

The information is here.

Monday, July 30, 2018

Biases Make People Vulnerable to Misinformation Spread by Social Media

Giovanni Luca Ciampaglia & Filippo Menczer
Scientific American
Originally published June 21, 2018

Here is an excerpt:

The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, they may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.

For instance, the detailed advertising tools built into many social media platforms let disinformation campaigners exploit confirmation bias by tailoring messages to people who are already inclined to believe them.

Also, if a user often clicks on Facebook links from a particular news source, Facebook will tend to show that person more of that site’s content. This so-called “filter bubble” effect may isolate people from diverse perspectives, strengthening confirmation bias.

The information is here.

Mental health practitioners’ reported barriers to prescription of exercise for mental health consumers

Kirsten Way, Lee Kannis-Dymand, Michele Lastella, Geoff P. Lovell
Mental Health and Physical Activity
Volume 14, March 2018, Pages 52-60

Abstract

Exercise is an effective evidence-based intervention for a range of mental health conditions; however, sparse research has investigated the exercise prescription behaviours of mental health practitioners as a collective, and the barriers faced in prescribing exercise for mental health. A self-report survey was completed online by 325 mental health practitioners to identify how often they prescribe exercise for various conditions and to explore their perceived barriers to exercise prescription for mental health through thematic analysis. Over 70% of the sample reported prescribing exercise regularly for depression, stress, and anxiety; however, infrequent rates of prescription were reported for schizophrenia, bipolar and related disorders, and substance-related disorders. Using thematic analysis, 374 statements on mental health practitioners' perceived barriers to exercise prescription were grouped into 22 initial themes and then six higher-order themes. Reported barriers to exercise prescription mostly revolved around clients' practical barriers and perspectives (41.7%) and the practitioners' knowledge and perspectives (33.2%). Within these two main themes, a lack of training (14.7%) and the client's disinclination (12.6%) were initial themes which recurred considerably more often than others. General practitioners, mental health nurses, and mental health managers also frequently cited barriers related to a lack of organisational support and resources. Barriers to the prescription of exercise, such as lack of training and the client's disinclination, need to be addressed in order to overcome challenges which restrict the prescription of exercise as a therapeutic intervention.

The research is here.

Sunday, July 29, 2018

White House Ethics Lawyer Finally Reaches His Breaking Point

And give up all this?
Bess Levin
Vanity Fair
Originally posted July 26, 2018

Here is an excerpt:

Politico reports that Passantino, one of the top lawyers in the White House, has plans to quit the administration by the end of the summer, leaving “a huge hole in the White House’s legal operation.” Despite the blow his loss will represent, it’s unlikely anyone will be able to convince him to stay and take one for the team, given he’s been working in what Passantino allies see as an “impossible” job. To recap: Passantino’s primary charge—the president—has refused to follow precedent and release his tax returns, and has held onto his business assets while in office. His son Eric, who runs said business along with Don Jr., says he gives his dad quarterly financial updates. He’s got a hotel down the road from the White House where foreign governments regularly stay as a way to kiss the ring. Two of his top advisers—his daughter and son-in-law—earned at least $82 million in outside income last year while serving in government. His Cabinet secretaries regularly compete with each other for the title of Most Blatantly Corrupt Trump Official. And Passantino is supposed to be “the clean-up guy” for all of it, a close adviser to the White House joked to Politico, which they can do because they’re not the one with a gig that would make even the most hardened Washington veteran cry.

The info is here.

Saturday, July 28, 2018

Costs, needs, and integration efforts shape helping behavior toward refugees

Robert Böhm, Maik M. P. Theelen, Hannes Rusch, and Paul A. M. Van Lange
PNAS June 25, 2018. 201805601; published ahead of print June 25, 2018

Abstract

Recent political instabilities and conflicts around the world have drastically increased the number of people seeking refuge. The challenges associated with the large number of arriving refugees have revealed a deep divide among the citizens of host countries: one group welcomes refugees, whereas another rejects them. Our research aim is to identify factors that help us understand host citizens’ (un)willingness to help refugees. We devise an economic game that captures the basic structural properties of the refugee situation. We use it to investigate both economic and psychological determinants of citizens’ prosocial behavior toward refugees. In three controlled laboratory studies, we find that helping refugees becomes less likely when it is individually costly to the citizens. At the same time, helping becomes more likely with the refugees’ neediness: helping increases when it prevents a loss rather than generates a gain for the refugees. Moreover, particularly citizens with higher degrees of prosocial orientation are willing to provide help at a personal cost. When refugees have to exert a minimum level of effort to be eligible for support by the citizens, these mandatory “integration efforts” further increase prosocial citizens’ willingness to help. Our results underscore that economic factors play a key role in shaping individual refugee helping behavior but also show that psychological factors modulate how individuals respond to them. Moreover, our economic game is a useful complement to correlational survey measures and can be used for pretesting policy measures aimed at promoting prosocial behavior toward refugees.

The research is here.

Friday, July 27, 2018

Morality in the Machines

Erik Trickey
Harvard Law Bulletin
Originally posted June 26, 2018

Here is an excerpt:

In February, the Harvard and MIT researchers endorsed a revised approach in the Massachusetts House’s criminal justice bill, which calls for a bail commission to study risk-assessment tools. In late March, the House-Senate conference committee included the more cautious approach in its reconciled criminal justice bill, which passed both houses and was signed into law by Gov. Charlie Baker in April.

Meanwhile, Harvard and MIT scholars are going still deeper into the issue. Bavitz and a team of Berkman Klein researchers are developing a database of governments that use risk scores to help set bail. It will be searchable to see whether court cases have challenged a risk-score tool’s use, whether that tool is based on peer-reviewed scientific literature, and whether its formulas are public.

Many risk-score tools are created by private companies that keep their algorithms secret. That lack of transparency creates due-process concerns, says Bavitz. “Flash forward to a world where a judge says, ‘The computer tells me you’re a risk score of 5 out of 7.’ What does it mean? I don’t know. There’s no opportunity for me to lift up the hood of the algorithm.” Instead, he suggests governments could design their own risk-assessment algorithms and software, using staff or by collaborating with foundations or researchers.

Students in the ethics class agreed that risk-score programs shouldn’t be used in court if their formulas aren’t transparent, according to then HLS 3L Arjun Adusumilli. “When people’s liberty interests are at stake, we really expect a certain amount of input, feedback and appealability,” he says. “Even if the thing is statistically great, and makes good decisions, we want reasons.”

The information is here.

Informed Consent and the Role of the Treating Physician

Holly Fernandez Lynch, Steven Joffe, and Eric A. Feldman
Originally posted June 21, 2018
N Engl J Med 2018; 378:2433-2438
DOI: 10.1056/NEJMhle1800071

Here are a few excerpts:

In 2017, the Pennsylvania Supreme Court ruled that informed consent must be obtained directly by the treating physician. The authors discuss the potential implications of this ruling and argue that a team-based approach to consent is better for patients and physicians.

(cut)

Implications in Pennsylvania and Beyond

Shinal has already had a profound effect in Pennsylvania, where it represents a substantial departure from typical consent practice. More than half the physicians who responded to a recent survey conducted by the Pennsylvania Medical Society (PAMED) reported a change in the informed-consent process in their work setting; of that group, the vast majority expressed discontent with the effect of the new approach on patient flow and the way patients are served. Medical centers throughout the state have changed their consent policies, precluding nonphysicians from obtaining patient consent to the procedures specified in the MCARE Act and sometimes restricting the involvement of physician trainees. Some Pennsylvania institutions have also applied the Shinal holding to research, in light of the reference in the MCARE Act to experimental products and uses, despite the clear policy of the Food and Drug Administration (FDA) allowing investigators to involve other staff in the consent process.

(cut)

Selected State Informed-Consent Laws.

Although the Shinal decision is not binding outside of Pennsylvania, cases bearing on critical ethical dimensions of consent have a history of influence beyond their own jurisdictions.

The information is here.

Thursday, July 26, 2018

Virtuous technology

Mustafa Suleyman
medium.com
Originally published June 26, 2018

Here is an excerpt:

There are at least three important asymmetries between the world of tech and the world itself. First, the asymmetry between people who develop technologies and the communities who use them. Salaries in Silicon Valley are twice the median wage for the rest of the US and the employee base is unrepresentative when it comes to gender, race, class and more. As we have seen in other fields, this risks a disconnect between the inner workings of organisations and the societies they seek to serve.

This is an urgent problem. Women and minority groups remain badly underrepresented, and leaders need to be proactive in breaking the mould. The recent spotlight on these issues has meant that more people are aware of the need for workplace cultures to change, but these underlying inequalities also make their way into our companies in more insidious ways. Technology is not value neutral — it reflects the biases of its creators — and must be built and shaped by diverse communities if we are to minimise the risk of unintended harms.

Second, there is an asymmetry of information regarding how technology actually works, and the impact that digital systems have on everyday life. Ethical outcomes in tech depend on far more than algorithms and data: they depend on the quality of societal debate and genuine accountability.

The information is here.