Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, May 8, 2019

Billions spent rebuilding Notre Dame shows lack of morality among wealthy

Gillian Fulford
Indiana Daily News Column
Originally posted April 23, 2019

Here is an excerpt:

Estimates to end world hunger are between $7 billion and $265 billion a year, and surely with 2,208 billionaires in the world, a few hundred could spare some cash to help ensure people aren’t starving to death. There aren’t billionaires in the news rushing to give money toward food aid, but even the richest man in Europe donated to repair the church.

Repairing churches is not a life-and-death matter. Churches, while culturally and religiously significant, are not necessary for life in the way that nutritious food is. Being an absurdly wealthy person who donates money only for things you find aesthetically pleasing is morally bankrupt in a world where that money could literally fund the end of world hunger.

This isn’t to say that rebuilding Notre Dame is bad — preserving culturally significant places is important. But the Roman Catholic Church is the richest religious organization in the world — it can probably manage repairing a church without the help of wealthy donors.

At a time when there are heated protests in the streets of France over taxes that unfairly affect the poor, pledging money toward buildings seems fraught. Spending billions on unnecessary buildings is a slap in the face to French people fighting for equitable wealth and tax distribution.

The info is here.

The ethics of emerging technologies

Jessica Hallman
www.techxplore.com
Originally posted April 10, 2019


Here is an excerpt:

"There's a new field called data ethics," said Fonseca, associate professor in Penn State's College of Information Sciences and Technology and 2019-2020 Faculty Fellow in Penn State's Rock Ethics Institute. "We are collecting data and using it in many different ways. We need to start thinking more about how we're using it and what we're doing with it."

By approaching emerging technology with a philosophical perspective, Fonseca can explore the ethical dilemmas surrounding how we gather, manage and use information. He explained that with the rise of big data, for example, many scientists and analysts are forgoing formulating hypotheses in favor of allowing data to drive inferences on particular problems.

"Normally, in science, theory drives observations. Our theoretical understanding guides both what we choose to observe and how we choose to observe it," Fonseca explained. "Now, with so much data available, science's classical picture of theory-building is under threat of being inverted, with data being suggested as the source of theories in what is being called data-driven science."

Fonseca shared these thoughts in his paper, "Cyber-Human Systems of Thought and Understanding," which was published in the April 2019 issue of the Journal of the Association of Information Sciences and Technology. Fonseca co-authored the paper with Michael Marcinowski, College of Liberal Arts, Bath Spa University, United Kingdom; and Clodoveu Davis, Computer Science Department, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.

In the paper, the researchers propose a concept to bridge the divide between theoretical thinking and a-theoretical, data-driven science.

The info is here.

Tuesday, May 7, 2019

Ethics Alone Can’t Fix Big Tech

Daniel Susser
Slate.com
Originally posted April 17, 2019

Here is an excerpt:

At a deeper level, these issues highlight problems with the way we’ve been thinking about how to create technology for good. Desperate for anything to rein in otherwise indiscriminate technological development, we have ignored the different roles our theoretical and practical tools are designed to play. With no coherent strategy for coordinating them, none succeed.

Consider ethics. In discussions about emerging technologies, there is a tendency to treat ethics as though it offers the tools to answer all values questions. I suspect this is largely ethicists’ own fault: Historically, philosophy (the larger discipline of which ethics is a part) has mostly neglected technology as an object of investigation, leaving that work for others to do. (Which is not to say there aren’t brilliant philosophers working on these issues; there are. But they are a minority.) The result, as researchers from Delft University of Technology and Leiden University in the Netherlands have shown, is that the vast majority of scholarly work addressing issues related to technology ethics is being conducted by academics trained and working in other fields.

This makes it easy to forget that ethics is a specific area of inquiry with a specific purview. And like every other discipline, it offers tools designed to address specific problems. To create a world in which A.I. helps people flourish (rather than just generate profit), we need to understand what flourishing requires, how A.I. can help and hinder it, and what responsibilities individuals and institutions have for creating technologies that improve our lives. These are the kinds of questions ethics is designed to address, and critically important work in A.I. ethics has begun to shed light on them.

At the same time, we also need to understand why attempts at building “good technologies” have failed in the past, what incentives drive individuals and organizations not to build them even when they know they should, and what kinds of collective action can change those dynamics. To answer these questions, we need more than ethics. We need history, sociology, psychology, political science, economics, law, and the lessons of political activism. In other words, to tackle the vast and complex problems emerging technologies are creating, we need to integrate research and teaching around technology with all of the humanities and social sciences.

The info is here.

Are Placebo-Controlled, Relapse Prevention Trials in Schizophrenia Research Still Necessary or Ethical?

Ryan E. Lawrence, Paul S. Appelbaum, Jeffrey A. Lieberman
JAMA Psychiatry. Published online April 10, 2019.
doi:10.1001/jamapsychiatry.2019.0275

Randomized, placebo-controlled trials have been the gold standard for evaluating the safety and efficacy of new psychotropic drugs for more than half a century. Although the US Food and Drug Administration (FDA) does not require placebo-controlled trial data to approve new drugs or marketing indications, they have become the industry standard for psychotropic drug development.

Placebos are controversial. The FDA guidelines state that “when a new treatment is tested for a condition for which no effective treatment is known, there is usually no ethical problem with a study comparing the new treatment to placebo.”1 However, “in cases where an available treatment is known to prevent serious harm, such as death or irreversible morbidity, it is generally inappropriate to use a placebo control.” When new antipsychotics are developed for schizophrenia, it can be debated which guideline applies.

From the Conclusion:

We believe the time has come to cease the use of placebo in relapse prevention studies and encourage the use of active comparators that would protect patients from relapse and provide information on the comparative effectiveness of the drugs studied. We recommend that pharmaceutical companies not seek maintenance labeling if it would require placebo-controlled, relapse prevention trials. However, for putative antipsychotics with a novel mechanism of action, placebo-controlled, relapse prevention trials may still be justifiable.

The info is here.

Monday, May 6, 2019

How do we make moral decisions?

Dartmouth College
Press Release
Originally released April 18, 2019

When it comes to making moral decisions, we often think of the golden rule: do unto others as you would have them do unto you. Yet, why we make such decisions has been widely debated. Are we motivated by feelings of guilt, where we don't want to feel bad for letting the other person down? Or by fairness, where we want to avoid unequal outcomes? Some people may rely on principles of both guilt and fairness and may switch their moral rule depending on the circumstances, according to a Radboud University - Dartmouth College study on moral decision-making and cooperation. The findings challenge prior research in economics, psychology and neuroscience, which is often based on the premise that people are motivated by one moral principle, which remains constant over time. The study was published recently in Nature Communications.

"Our study demonstrates that with moral behavior, people may not in fact always stick to the golden rule. While most people tend to exhibit some concern for others, others may demonstrate what we have called 'moral opportunism,' where they still want to look moral but want to maximize their own benefit," said lead author Jeroen van Baar, a postdoctoral research associate in the department of cognitive, linguistic and psychological sciences at Brown University, who started this research when he was a scholar at Dartmouth visiting from the Donders Institute for Brain, Cognition and Behavior at Radboud University.

"In everyday life, we may not notice that our morals are context-dependent since our contexts tend to stay the same daily. However, under new circumstances, we may find that the moral rules we thought we'd always follow are actually quite malleable," explained co-author Luke J. Chang, an assistant professor of psychological and brain sciences and director of the Computational Social Affective Neuroscience Laboratory (Cosan Lab) at Dartmouth. "This has tremendous ramifications if one considers how our moral behavior could change under new contexts, such as during war," he added.

The info is here.

The research is here.

Ethical Considerations Regarding Internet Searches for Patient Information.

Charles C. Dike, Philip Candilis, Barbara Kocsis, and others
Psychiatric Services
Published Online:17 Jan 2019

Abstract

In 2010, the American Medical Association developed policies regarding professionalism in the use of social media, but it did not present specific ethical guidelines on targeted Internet searches for information about a patient or the patient’s family members. The American Psychiatric Association (APA) provided some guidance in 2016 through the Opinions of the Ethics Committee, but published opinions are limited. On behalf of the APA Ethics Committee, the authors developed a resource document describing ethical considerations regarding Internet and social media searches for patient information, from which this article has been adapted. Recommendations include the following. Except in emergencies, it is advisable to obtain a patient’s informed consent before performing such a search. The psychiatrist should be aware of his or her motivations for performing a search and should avoid doing so unless it serves the patient’s best interests. Information obtained through such searches should be handled with sensitivity regarding the patient’s privacy. The psychiatrist should consider how the search might influence the clinician-patient relationship. When interpreted with caution, Internet- and social media–based information may be appropriate to consider in forensic evaluations.

The info is here.

Sunday, May 5, 2019

When a Colleague Dies, CEOs Change How They Lead

Guoli Chen
www.barrons.com
Originally posted April 8, 2019

Here is an excerpt:

A version of my research, “That Could Have Been Me: Director Deaths, CEO Mortality Salience and Corporate Prosocial Behavior” (co-authored with Craig Crossland and Sterling Huang and forthcoming in Management Science), notes the significant impact a director’s death can have on resource allocation within a firm and on the CEO’s activities, both outside and inside the organization.

For example, we saw that CEOs who’d experienced the death of a director on their boards reduced the number of outside directorships they held in publicly listed firms. At the same time, they increased their number of directorships in nonprofit organizations. It seems that thoughts of mortality had inspired a desire to make a lasting, positive contribution to society, or to jettison some priorities in favor of more pro-social ones.

We also saw differences in how CEOs led their firms. In our study, which compared public firms where a director had died between 1990 and 2013 with similar firms where no director had died, we saw that CEOs who’d experienced the death of a close colleague spent less effort on their firms’ immediate growth and financial-return activities. We found an increase in cost of goods sold after a director’s death, and the companies these CEOs lead become less aggressive in expanding their assets and firm size. This could reflect the “quiet life” or “withdrawal behavior” hypotheses, which suggest that CEOs become less engaged with corporate activities once they confront the finiteness of their life span. They may shift their time and focus from corporate activities to family or community ones.

Meanwhile, we also observed that firms led by these CEOs increased their corporate social responsibility (CSR) activities after a director’s death. CEOs with a heightened awareness of death steer their firms’ resource allocation toward activities that provide benefits to broader stakeholders, such as employee health plans, more environmentally friendly manufacturing processes, and charitable contributions.

The info is here.

Saturday, May 4, 2019

Moral Grandstanding in Public Discourse

Joshua Grubbs, Brandon Warmke, Justin Tosi, & Alicia James
PsyArXiv Preprints
Originally posted April 5, 2019

Abstract

Public discourse is often caustic and conflict-filled. This trend seems to be particularly evident when the content of such discourse is around moral issues (broadly defined) and when the discourse occurs on social media. Several explanatory mechanisms for such conflict have been explored in recent psychological and social-science literatures. The present work sought to examine a potentially novel explanatory mechanism defined in the philosophical literature: Moral Grandstanding. According to philosophical accounts, Moral Grandstanding is the use of moral talk to seek social status. For the present work, we conducted five studies, using two undergraduate samples (Study 1, N = 361; Study 2, N = 356); a sample matched to U.S. norms for age, gender, race, income, and Census region (Study 3, N = 1,063); a YouGov sample matched to U.S. demographic norms (Study 4, N = 2,000); and a brief, one-month longitudinal study of Mechanical Turk workers in the U.S. (Study 5, baseline N = 499, follow-up N = 296). Across studies, we found initial support for the validity of Moral Grandstanding as a construct. Specifically, Moral Grandstanding was associated with status-seeking personality traits, as well as greater political and moral conflict in daily life.

Here is part of the Conclusion:

Public discourse regarding morally charged topics is prone to conflict and polarization, particularly on social media platforms that tend to facilitate ideological echo chambers. The present study introduces an interdisciplinary construct called Moral Grandstanding as a possible contributing factor to this phenomenon. MG links evolutionary psychology and moral philosophy to describe the use of public moral speech to enhance one’s status or image in the eyes of others. Specifically, MG is framed as an expression of status-seeking drives in the domain of public discourse. Self-reported motivations underlying grandstanding behaviors appear consistent with the construct of status-seeking more broadly, representing both prestige striving and dominance striving, each of which was associated with greater interpersonal conflict and polarization.

The research is here.

Friday, May 3, 2019

Real or artificial? Tech titans declare AI ethics concerns

Matt O'Brien and Rachel Lerman
Associated Press
Originally posted April 7, 2019

Here is an excerpt:

"Ethical AI" has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.

But how much substance lies behind the increasingly public ethics campaigns? And who gets to decide which technological pursuits do no harm?

Google was hit with both questions when it formed a new board of outside advisers in late March to help guide how it uses AI in products. But instead of winning over potential critics, it sparked internal rancor. A little more than a week later, Google bowed to pressure from the backlash and dissolved the council.

The outside board fell apart in stages. One of the board's eight inaugural members quit within days and another quickly became the target of protests from Google employees who said her conservative views don't align with the company's professed values.

As thousands of employees called for the removal of Heritage Foundation President Kay Coles James, Google disbanded the board last week.

"It's become clear that in the current environment, (the council) can't function as we wanted," the company said in a statement.

The info is here.