Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, November 30, 2017

Artificial Intelligence & Mental Health

Smriti Joshi
Chatbot News Daily
Originally posted

Here is an excerpt:

There are many barriers to getting quality mental healthcare, from searching for a provider who practices in a user's geographical location to screening multiple potential therapists in order to find someone you feel comfortable speaking with. The stigma associated with seeking mental health treatment often leaves people silently suffering from a psychological issue. These barriers stop many people from finding help, and AI is being looked at as a potential tool to bridge this gap between service providers and service users.

Imagine how many people would benefit if artificial intelligence could bring quality, affordable mental health support to anyone with an internet connection. A psychiatrist or psychologist examines a person’s tone, word choice, the length of a phrase, and so on; these are all crucial cues to understanding what’s going on in someone’s mind. Researchers are now applying machine learning to diagnose people with mental disorders. Harvard University and University of Vermont researchers are working on integrating machine learning tools and Instagram to improve depression screening. Using color analysis, metadata, and algorithmic face detection, they were able to reach 70 percent accuracy in detecting signs of depression. The research wing at IBM is using transcripts and audio from psychiatric interviews, coupled with machine learning techniques, to find patterns in speech that help clinicians accurately predict and monitor psychosis, schizophrenia, mania, and depression. A study led by John Pestian, a professor at Cincinnati Children’s Hospital Medical Center, showed that machine learning is up to 93 percent accurate in identifying a suicidal person.
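To make the screening idea concrete, here is a minimal toy sketch of the general approach: extract simple image-level features and classify with a threshold. Everything here is an assumption for illustration — the feature (mean brightness), the threshold, and the synthetic data are not the Harvard/Vermont researchers' actual pipeline, which used richer color analysis, metadata, and face detection.

```python
# Toy sketch only: a threshold classifier over synthetic image features,
# in the spirit of "color analysis -> depression screening." The feature
# name, threshold, and data distributions are assumptions, not the study's.
import random

random.seed(0)

def synthetic_posts(n):
    """Generate (mean_brightness, depressed_label) pairs, assuming
    depressed users' photos skew darker on average."""
    posts = []
    for _ in range(n):
        depressed = random.random() < 0.5
        base = 0.35 if depressed else 0.55  # assumed class means
        brightness = min(1.0, max(0.0, random.gauss(base, 0.1)))
        posts.append((brightness, depressed))
    return posts

def classify(brightness, threshold=0.45):
    # Predict "depressed" when the photo is darker than the threshold.
    return brightness < threshold

def accuracy(posts):
    correct = sum(classify(b) == label for b, label in posts)
    return correct / len(posts)

data = synthetic_posts(1000)
print(f"toy accuracy: {accuracy(data):.2f}")
```

A real system would replace the single hand-set threshold with a model fit to labeled data, but the evaluation step — comparing predictions against labels to report an accuracy figure like the study's 70 percent — has this same shape.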

The post is here.

Why We Should Be Concerned About Artificial Superintelligence

Matthew Graves
Skeptic Magazine
Originally published November 2017

Here is an excerpt:

Our intelligence is ultimately a mechanistic process that happens in the brain, but there is no reason to assume that human intelligence is the only possible form of intelligence. And while the brain is complex, this is partly an artifact of the blind, incremental progress that shaped it—natural selection. This suggests that developing machine intelligence may turn out to be a simpler task than reverse-engineering the entire brain. The brain sets an upper bound on the difficulty of building machine intelligence; work to date in the field of artificial intelligence sets a lower bound; and within that range, it’s highly uncertain exactly how difficult the problem is. We could be 15 years away from the conceptual breakthroughs required, or 50 years away, or more.

The fact that artificial intelligence may be very different from human intelligence also suggests that we should be very careful about anthropomorphizing AI. Depending on the design choices AI scientists make, future AI systems may not share our goals or motivations; they may have very different concepts and intuitions; or terms like “goal” and “intuition” may not even be particularly applicable to the way AI systems think and act. AI systems may also have blind spots regarding questions that strike us as obvious. AI systems might also end up far more intelligent than any human.

The last possibility deserves special attention, since superintelligent AI has far more practical significance than other kinds of AI.

AI researchers generally agree that superintelligent AI is possible, though they have different views on how and when it’s likely to be developed. In a 2013 survey, top-cited experts in artificial intelligence assigned a median 50% probability to AI being able to “carry out most human professions at least as well as a typical human” by the year 2050, and also assigned a 50% probability to AI greatly surpassing the performance of every human in most professions within 30 years of reaching that threshold.

The article is here.

Wednesday, November 29, 2017

The Hype of Virtual Medicine

Ezekiel J. Emanuel
The Wall Street Journal
Originally posted Nov. 10, 2017

Here is an excerpt:

But none of this will have much of an effect on the big and unsolved challenge for American medicine: how to change the behavior of patients. According to the Centers for Disease Control and Prevention, fully 86% of all health care spending in the U.S. is for patients with chronic illness—emphysema, arthritis and the like. How are we to make real inroads against these problems? Patients must do far more to monitor their diseases, take their medications consistently and engage with their primary-care physicians and nurses. In the longer term, we need to lower the number of Americans who suffer from these diseases by getting them to change their habits and eat healthier diets, exercise more and avoid smoking.

There is no reason to think that virtual medicine will succeed in inducing most patients to cooperate more with their own care, no matter how ingenious the latest gizmos. Many studies that have tried some high-tech intervention to improve patients’ health have failed.

Consider the problem of patients who do not take their medication properly, leading to higher rates of complications, hospitalization and even mortality. Researchers at Harvard, in collaboration with CVS, published a study in JAMA Internal Medicine in May comparing different low-cost devices for encouraging patients to take their medication as prescribed. The more than 50,000 participants were randomly assigned to one of three options: high-tech pill bottles with digital timer caps, pillboxes with daily compartments or standard plastic pillboxes. The high-tech pill bottles did nothing to increase compliance.

Other efforts have produced similar failures.

The article is here.

A Lost World

Michael Sacasas
thefrailestthing.com
Originally posted January 29, 2017

Here is the conclusion:

Rather, it is a situation in which moral evaluations themselves have shifted. It is not that some people now lied and called an act of thoughtless aggression a courageous act. It is that what had before been commonly judged to be an act of thoughtless aggression was now judged by some to be a courageous act. In other words, it would appear that in very short order, moral judgments and the moral vocabulary in which they were expressed shifted dramatically.

It brings to mind Hannah Arendt’s frequent observation about how quickly the self-evidence of long-standing moral principles was overturned in Nazi Germany: “… it was as though morality suddenly stood revealed in the original meaning of the word, as a set of mores, customs and manners, which could be exchanged for another set with hardly more trouble than it would take to change the table manners of an individual or a people.”

It is shortsighted, at this juncture, to ask how we can find agreement or even compromise. We do not, now, even know how to disagree well; nothing like an argument in the traditional sense is being had. It is an open question whether anyone can even be said to be speaking intelligibly to anyone who does not already fully agree with their positions and premises. The common world that is both the condition of speech and its gift to us is withering away. A rift has opened up in our political culture that will not be mended until we figure out how to reconstruct the conditions under which speech can once again become meaningful. Until then, I fear, the worst is still before us.

The post is here.

Tuesday, November 28, 2017

Trusting big health data

Angela Villanueva
Baylor College of Medicine Blogs
Originally posted November 10, 2017

Here is an excerpt:

Potentially exacerbating this mistrust is a sense of loss of privacy and absence of control over information describing us and our habits. Given the extent of current “everyday” data collection and sharing for marketing and other purposes, this lack of trust is not unreasonable.

Health information sharing makes many people uneasy, particularly because of the potential harms such as insurance discrimination or stigmatization. Data breaches like the recent Equifax hack may add to these concerns and affect people’s willingness to share their health data.

But it is critical to encourage members of all groups to participate in big data initiatives focused on health in order for all to benefit from the resulting discoveries. My colleagues and I recently published an article detailing eight guiding principles for successful data sharing; building trust is one of them.

Here is the article.

Don’t Nudge Me: The Limits of Behavioral Economics in Medicine

Aaron E. Carroll
The New York Times - The Upshot
Originally posted November 6, 2017

Here is an excerpt:

But those excited about the potential of behavioral economics should keep in mind the results of a recent study. It pulled out all the stops in trying to get patients who had a heart attack to be more compliant in taking their medication. (Patients’ adherence at such a time is surprisingly low, even though it makes a big difference in outcomes, so this is a major problem.)

Researchers randomly assigned more than 1,500 people to one of two groups. All had recently had heart attacks. One group received the usual care. The other received special electronic pill bottles that monitored patients’ use of medication. Those patients who took their drugs were entered into a lottery in which they had a 20 percent chance to receive $5 and a 1 percent chance to win $50 every day for a year.

That’s not all. The lottery group members could also sign up to have a friend or family member automatically be notified if they didn’t take their pills so that they could receive social support. They were given access to special social work resources. There was even a staff engagement adviser whose specific duty was providing close monitoring and feedback, and who would remind patients about the importance of adherence.

This was a kitchen-sink approach. It involved direct financial incentives, social support nudges, health care system resources and significant clinical management. It failed.
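The size of the failed incentive is worth pausing on. Using only the probabilities reported in the excerpt — a 20 percent daily chance of $5 and a 1 percent daily chance of $50, for a year — a back-of-the-envelope expected-value calculation shows what a fully adherent patient stood to gain:

```python
# Expected value of the trial's daily lottery, from the excerpt's figures:
# 20% chance of $5 and 1% chance of $50, every day for a year.
p_small, v_small = 0.20, 5.00
p_large, v_large = 0.01, 50.00

expected_daily = p_small * v_small + p_large * v_large
expected_yearly = expected_daily * 365

print(f"expected daily payout:  ${expected_daily:.2f}")   # $1.50
print(f"expected yearly payout: ${expected_yearly:.2f}")  # $547.50
```

Roughly $550 a year in expectation, plus social support and dedicated staff — and adherence still did not improve, which is what makes the result so striking.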

The article is here.

Monday, November 27, 2017

Social Media Channels in Health Care Research and Rising Ethical Issues

Samy A. Azer
AMA Journal of Ethics. November 2017, Volume 19, Number 11: 1061-1069.

Abstract

Social media channels such as Twitter, Facebook, and LinkedIn have been used as tools in health care research, opening new horizons for research on health-related topics (e.g., the use of mobile social networking in weight loss programs). While there have been efforts to develop ethical guidelines for internet-related research, researchers still face unresolved ethical challenges. This article investigates some of the risks inherent in social media research and discusses how researchers should handle challenges related to confidentiality, privacy, and consent when social media tools are used in health-related research.

Here is an excerpt:

Social Media Websites and Ethical Challenges

While one may argue that regardless of the design and purpose of social media websites (channels) all information conveyed through social media should be considered public and therefore usable in research, such a generalization is incorrect and does not reflect the principles we follow in other types of research. The distinction between public and private online spaces can blur, and in some situations it is difficult to draw a line. Moreover, as discussed later, social media channels operate under different rules than research, and thus using these tools in research may raise a number of ethical concerns, particularly in health-related research. Good research practice fortifies high-quality science; ethical standards, including integrity; and the professionalism of those conducting the research. Importantly, it ensures the confidentiality and privacy of information collected from individuals participating in the research. Yet, in social media research, there are challenges to ensuring confidentiality, privacy, and informed consent.

The article is here.

Suicide Is Not The Same As "Physician Aid In Dying"

American Association of Suicidology
Suicide Is Not The Same As "Physician Aid In Dying"
Approved October 30, 2017

Executive summary 

The American Association of Suicidology recognizes that the practice of physician aid in dying, also called physician assisted suicide, Death with Dignity, and medical aid in dying, is distinct from the behavior that has been traditionally and ordinarily described as “suicide,” the tragic event our organization works so hard to prevent. Although there may be overlap between the two categories, legal physician assisted deaths should not be considered to be cases of suicide and are therefore a matter outside the central focus of the AAS.

(cut)

Conclusion 

In general, suicide and physician aid in dying are conceptually, medically, and legally different phenomena, with an undetermined amount of overlap between these two categories. The American Association of Suicidology is dedicated to preventing suicide, but this has no bearing on the reflective, anticipated death a physician may legally help a dying patient facilitate, whether called physician-assisted suicide, Death with Dignity, physician assisted dying, or medical aid in dying. In fact, we believe that the term “physician-assisted suicide” in itself constitutes a critical reason why these distinct death categories are so often conflated, and should be deleted from use. Such deaths should not be considered to be cases of suicide and are therefore a matter outside the central focus of the AAS.

The full document is here.

Sunday, November 26, 2017

The Wisdom in Virtue: Pursuit of Virtue Predicts Wise Reasoning About Personal Conflicts

Alex C. Huynh, Harrison Oakes, Garrett R. Shay, & Ian McGregor
Psychological Science
Article first published online: October 3, 2017

Abstract

Most people can reason relatively wisely about others’ social conflicts, but often struggle to do so about their own (i.e., Solomon’s paradox). We suggest that true wisdom should involve the ability to reason wisely about both others’ and one’s own social conflicts, and we investigated the pursuit of virtue as a construct that predicts this broader capacity for wisdom. Results across two studies support prior findings regarding Solomon’s paradox: Participants (N = 623) more strongly endorsed wise-reasoning strategies (e.g., intellectual humility, adopting an outsider’s perspective) for resolving other people’s social conflicts than for resolving their own. The pursuit of virtue (e.g., pursuing personal ideals and contributing to other people) moderated this effect of conflict type. In both studies, greater endorsement of the pursuit of virtue was associated with greater endorsement of wise-reasoning strategies for one’s own personal conflicts; as a result, participants who highly endorsed the pursuit of virtue endorsed wise-reasoning strategies at similar levels for resolving their own social conflicts and resolving other people’s social conflicts. Implications of these results and underlying mechanisms are explored and discussed.

Here is an excerpt:

We propose that the litmus test for wise character is whether one can reason wisely about one’s own social conflicts. As did the biblical King Solomon, people tend to reason more wisely about others’ social conflicts than their own (i.e., Solomon’s paradox; Grossmann & Kross, 2014, see also Mickler & Staudinger, 2008, for a discussion of personal vs. general wisdom). Personal conflicts impede wise reasoning because people are more likely to immerse themselves in their own perspective and emotions, relegating other perspectives out of awareness, and increasing certainty regarding preferred perspectives (Kross & Grossmann, 2012; McGregor, Zanna, Holmes, & Spencer, 2001). In contrast, reasoning about other people’s conflicts facilitates wise reasoning through the adoption of different viewpoints and the avoidance of sociocognitive biases (e.g., poor recognition of one’s own shortcomings—e.g., Pronin, Olivola, & Kennedy, 2008). In the present research, we investigated whether virtuous motives facilitate wisdom about one’s own conflicts, enabling one to pass the litmus test for wise character.

The article is here.