Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, May 19, 2019

House Democrats seek details of Trump ethics waivers

Kate Ackley
www.rollcall.com
Originally posted May 17, 2019

Rep. Elijah E. Cummings, chairman of the Oversight and Reform Committee, wants a status update on the state of the swamp in the Trump administration.

The Maryland Democrat launched an investigation late this week into the administration’s use of ethics waivers, which allow former lobbyists to work on matters they handled in their previous private sector jobs. Cummings sent letters to the White House and 24 agencies and Cabinet departments requesting copies of their ethics pledges and details of any waivers that could expose “potential conflicts of interest.”

“Although the White House committed to providing information on ethics waivers on its website, the White House has failed to disclose comprehensive information about the waivers,” Cummings wrote in a May 16 letter to White House counsel Pat Cipollone.

A White House official declined comment on the investigation, and a committee aide said the administration had not yet responded to the requests. A spokeswoman for Rep. Jim Jordan of Ohio, the top Republican on the Oversight panel, did not immediately provide a comment.

After President Donald Trump ran on a “drain the swamp” message, the Trump administration ushered in a tough-sounding ethics pledge through an executive order in January 2017 requiring officials to recuse themselves from participating in matters they had lobbied on in the previous two years. But the waivers allow appointees to circumvent those restrictions.

The info is here.

Saturday, May 18, 2019

The Neuroscience of Moral Judgment

Joanna Demaree-Cotton & Guy Kahane
Published in The Routledge Handbook of Moral Epistemology, eds. Karen Jones, Mark Timmons, and Aaron Zimmerman (Routledge, 2018).

Abstract:

This chapter examines the relevance of the cognitive science of morality to moral epistemology, with special focus on the issue of the reliability of moral judgments. It argues that the kind of empirical evidence of most importance to moral epistemology is at the psychological rather than neural level. The main theories and debates that have dominated the cognitive science of morality are reviewed with an eye to their epistemic significance.

1. Introduction

We routinely make moral judgments about the rightness of acts, the badness of outcomes, or people’s characters. When we form such judgments, our attention is usually fixed on the relevant situation, actual or hypothetical, not on our own minds. But our moral judgments are obviously the result of mental processes, and we often enough turn our attention to aspects of this process—to the role, for example, of our intuitions or emotions in shaping our moral views, or to the consistency of a judgment about a case with more general moral beliefs.

Philosophers have long reflected on the way our minds engage with moral questions—on the conceptual and epistemic links that hold between our moral intuitions, judgments, emotions, and motivations. This form of armchair moral psychology is still alive and well, but it’s increasingly hard to pursue it in complete isolation from the growing body of research in the cognitive science of morality (CSM). This research is not only uncovering the psychological structures that underlie moral judgment but, increasingly, also their neural underpinning—utilizing, in this connection, advances in functional neuroimaging, brain lesion studies, psychopharmacology, and even direct stimulation of the brain. Evidence from such research has been used not only to develop grand theories about moral psychology, but also to support ambitious normative arguments.

Friday, May 17, 2019

More than 300 overworked NHS nurses have died by suicide in just seven years

Lucy, a Liverpool student nurse, took her own life two years ago.
Alan Selby
The Mirror
Originally posted April 27, 2019

More than 300 nurses have taken their own lives in just seven years, shocking new figures reveal.

During the worst year, one was dying by suicide EVERY WEEK as Tory cuts began to bite deep into the NHS.

Today victims’ families call for vital early mental health training and support for young nurses – and an end to a “bullying and toxic culture” in the health service which leaves them afraid to ask for help in their darkest moments.

One mum – whose trainee nurse daughter Lucy de Oliveira killed herself while juggling other jobs to make ends meet – told us: “They’re working all hours God sends doing a really important job. Most of them would be better off working in McDonald’s. That can’t be right.”

Shadow Health Secretary Jonathan Ashworth has called for a government inquiry into the “alarming” figures – 23 per cent higher than the national average – from 2011 to 2017, the latest year on record.

“Every life lost is a desperate tragedy,” he said. “The health and wellbeing of NHS staff must never be compromised.”

The info is here.

Scientific Misconduct in Psychology: A Systematic Review of Prevalence Estimates and New Empirical Data

Johannes Stricker & Armin Günther
Zeitschrift für Psychologie
Published online: March 29, 2019

Abstract

Spectacular cases of scientific misconduct have contributed to concerns about the validity of published results in psychology. In our systematic review, we identified 16 studies reporting prevalence estimates of scientific misconduct and questionable research practices (QRPs) in psychological research. Estimates from these studies varied due to differences in methods and scope. Unlike other disciplines, there was no reliable lower bound prevalence estimate of scientific misconduct based on identified cases available for psychology. Thus, we conducted an additional empirical investigation on the basis of retractions in the database PsycINFO. Our analyses showed that 0.82 per 10,000 journal articles in psychology were retracted due to scientific misconduct. Between the late 1990s and 2012, there was a steep increase. Articles retracted due to scientific misconduct were identified in 20 out of 22 PsycINFO subfields. These results show that measures aiming to reduce scientific misconduct should be promoted equally across all psychological subfields.

The research is here.

Thursday, May 16, 2019

It’s Our ‘Moral Responsibility’ to Give The FBI Access to Your DNA

Jennings Brown
www.gizmodo.com
Originally published April 3, 2019

A popular DNA-testing company seems to be targeting true crime fans with a new pitch to let them share their genetic information with law enforcement so cops can catch violent criminals.

Two months ago, FamilyTreeDNA raised privacy concerns after BuzzFeed revealed the company had partnered with the FBI and given the agency access to the genealogy database. Law enforcement’s use of DNA databases has been widely known since last April when California officials revealed genealogy website information was instrumental in determining the identity of the Golden State Killer. But in that case, detectives used publicly shared raw genetic data on GEDmatch. The recent news about FamilyTreeDNA marked the first known time a home DNA test company had willingly shared private genetic information with law enforcement.

Several weeks later, FamilyTreeDNA changed their rules to allow customers to block the FBI from accessing their information. “Users now have the ability to opt out of matching with DNA relatives whose accounts are flagged as being created to identify the remains of a deceased individual or a perpetrator of a homicide or sexual assault,” the company said in a statement at the time.

But now the company seems to be embracing this partnership with law enforcement with their new campaign called, “Families Want Answers.”

The info is here.

Memorial Sloan Kettering Leaders Violated Conflict-of-Interest Rules, Report Finds

Charles Ornstein and Katie Thomas
ProPublica.org
Originally posted April 4, 2019

Top officials at Memorial Sloan Kettering Cancer Center repeatedly violated policies on financial conflicts of interest, fostering a culture in which profits appeared to take precedence over research and patient care, according to details released on Thursday from an outside review.

The findings followed months of turmoil over executives’ ties to drug and health care companies at one of the nation’s leading cancer centers. The review, conducted by the law firm Debevoise & Plimpton, was outlined at a staff meeting on Thursday morning. It concluded that officials frequently violated or skirted their own policies; that hospital leaders’ ties to companies were likely considered on an ad hoc basis rather than through rigorous vetting; and that researchers were often unaware that some senior executives had financial stakes in the outcomes of their studies.

In acknowledging flaws in its oversight of conflicts of interest, the cancer center announced on Thursday an extensive overhaul of policies governing employees’ relationships with outside companies and financial arrangements — including public disclosure of doctors’ ties to corporations and limits on outside work.

The info is here.

Wednesday, May 15, 2019

Moral self-judgment is stronger for future than past actions

Sjåstad, H. & Baumeister, R.F.
Motiv Emot (2019).
https://doi.org/10.1007/s11031-019-09768-8

Abstract

When, if ever, would a person want to be held responsible for his or her choices? Across four studies (N = 915), people favored more extreme rewards and punishments for their future than their past actions. This included thinking that they should receive more blame and punishment for future misdeeds than for past ones, and more credit and reward for future good deeds than for past ones. The tendency to moralize the future more than the past was mediated by anticipating (one’s own) emotional reactions and concern about one’s reputation, which was stronger in the future as well. The findings fit the pragmatic view that people moralize the future partly to guide their choices and actions, such as by increasing their motivation to restrain selfish impulses and build long-term cooperative relationships with others. People typically believe that the future is open and changeable, while the past is not. We conclude that the psychology of moral accountability has a strong future component.

Here is a snip from Concluding Remarks

A recent article by Uhlmann, Pizarro, and Diermeier (2015) proposed an important shift in the foundation of moral psychology. Whereas most research has focused on how people judge moral actions, Uhlmann et al. proposed that the primary, focal purpose is to judge persons. They suggested that this has a prospective dimension: Ultimately, the pragmatic goal is to know whom one can cooperate with, rely on, and otherwise trust in the future. Judging past actions is a means toward predicting the future, with the focus on individual persons.

The present findings fit well with and even extend that analysis. The orientation toward the future is not limited to judging and predicting the moral character of others but also extends to oneself. If one functional purpose of morality is to promote group cohesion and cooperation in the future, people apparently think that part of that involves raising expectations and standards for their own future behavior as well.

The pre-print can be found here.

Students' Ethical Decision‐Making When Considering Boundary Crossings With Counselor Educators

Stephanie T. Burns
Counseling and Values
First published: 10 April 2019
https://doi.org/10.1002/cvj.12094

Abstract

Counselor education students (N = 224) rated 16 boundary‐crossing scenarios involving counselor educators. They viewed boundary crossings as unethical and were aware of power differentials between the 2 groups. Next, they rated the scenarios again, after reviewing 1 of 4 ethical informational resources: relevant standards in the ACA Code of Ethics (American Counseling Association, 2014), 2 different boundary‐crossing decision‐making models, and a placebo. Although participants rated all resources except the placebo as moderately helpful, these resources had little to no influence on their ethical decision‐making. Only 47% of students in the 2 ethical decision‐making model groups reported they would use the model they were exposed to in the future when contemplating boundary crossings.

Here is a portion from Implications for Practice and Training

Counselor education students took conservative stances toward the 16 boundary-crossing scenarios with counselor educators. These findings support results of previous researchers who stated that students struggle with even the smallest of boundary crossings (Kozlowski et al., 2014) because they understand that power differentials have implications for grades, evaluations, recommendation letters, and obtaining authentic skill development feedback (Gu et al., 2011). Counselor educators need to be aware that students find not providing appropriate feedback because of the counselor educator’s personal feelings toward the student, not providing students with required supervision time in practicum, and taking first authorship when the student performed all the work on the submission as being as abusive as having sex with a student.

The research is here.

Tuesday, May 14, 2019

Is ancient philosophy the future?

Donald Robertson
The Globe and Mail
Originally published April 19, 2019

Recently, a bartender in Nova Scotia showed me a quote from the Roman emperor Marcus Aurelius tattooed on his forearm. “Waste no more time arguing what a good man should be,” it said, “just be one.”

We live in an age when social media bombards everyone, especially the young, with advice about every aspect of their lives. Stoic philosophy, of which Marcus Aurelius was history’s most famous proponent, taught its followers not to waste time on diversions that don’t actually improve their character.

In recent decades, Stoicism has been experiencing a resurgence in popularity, especially among millennials. There has been a spate of popular self-help books that helped to spread the word. One of the best known is Ryan Holiday and Steven Hanselman’s The Daily Stoic, which introduced a whole new generation to the concept of philosophy, based on the classics, as a way of life. It has fuelled interest among Silicon Valley entrepreneurs. So has endorsement from self-improvement guru Tim Ferriss, who describes Stoicism as the “ideal operating system for thriving in high-stress environments.”

Why should the thoughts of a Roman emperor who died nearly 2,000 years ago seem particularly relevant today, though? What’s driving this rebirth of Stoicism?

The info is here.

Who Should Decide How Algorithms Decide?

Mark Esposito, Terence Tse, Joshua Entsminger, and Aurelie Jean
Project-Syndicate
Originally published April 17, 2019

Here is an excerpt:

Consider the following scenario: a car from China has different factory standards than a car from the US, but is shipped to and used in the US. This Chinese-made car and a US-made car are heading for an unavoidable collision. If the Chinese car’s driver has different ethical preferences than the driver of the US car, which system should prevail?

Beyond culturally based differences in ethical preferences, one also must consider differences in data regulations across countries. A Chinese-made car, for example, might have access to social-scoring data, allowing its decision-making algorithm to incorporate additional inputs that are unavailable to US carmakers. Richer data could lead to better, more consistent decisions, but should that advantage allow one system to overrule another?

Clearly, before AVs take to the road en masse, we will need to establish where responsibility for algorithmic decision-making lies, be it with municipal authorities, national governments, or multilateral institutions. More than that, we will need new frameworks for governing this intersection of business and the state. At issue is not just what AVs will do in extreme scenarios, but how businesses will interact with different cultures in developing and deploying decision-making algorithms.

The info is here.

Monday, May 13, 2019

How has President Trump changed white Christians' views of 'morality'?

Brandon Showalter
The Christian Post
Originally published April 26, 2019

A notable shift has taken place within the past decade regarding how white evangelicals consider "morality" with regard to the politicians they support.

While the subject was frequently discussed during the 2016 election cycle in light of significant support then-candidate Donald Trump received from evangelical Christians, the attitude shift related to what an elected official does in his private life having any bearing on his public duties appears to have persisted over two years into his presidency, The Washington Post noted Thursday.

A 2011 Public Religion Research Institute and Religion News Service poll found that 60 percent of white evangelicals believed that a public official who “commits an immoral act in their personal life” cannot still “behave ethically and fulfill their duties in their public and professional life.”

By October 2016, however, shortly after the release of the “Access Hollywood” tape in which President Trump was heard making lewd comments, another PRRI poll found that only 20 percent of white evangelicals answered the same question the same way.

No other religious demographic saw such a profound change.

The info is here.

How to Be an Ethical Leader: 4 Tips for Success

Sammi Caramela
www.businessnewsdaily.com
Originally posted August 27, 2018

Here is an excerpt:

Define and align your morals

Consider the values you had growing up – treat others how you want to be treated, always say "thank you," show support to those struggling, etc. But as you grow, and as society progresses, conventions change, often causing values to shift.

"This is the biggest challenge ethics face in our culture and at work, and the biggest challenge ethical leadership faces," said Matthew Kelly, founder and CEO of FLOYD Consulting and author of "The Culture Solution" (Blue Sparrow Books, 2019). "What used to be universally accepted as good and true, right and just, is now up for considerable debate. This environment of relativism makes it very difficult for values-based leaders."

Kelly added that to find success in ethical leadership, you should demonstrate how adhering to specific values benefits the mission of the organization.

"Culture is not a collection of personal preferences," he said. "Mission is king. When that ceases to be true, an organization has begun its journey toward the mediocre middle."

Ask yourself what matters to you as an individual and then align that with your priorities as a leader. Defining your morals not only expresses your authenticity, it encourages your team to do the same, creating a shared vision for all workers.

Hire those with similar ethics

While your ethics don't need to be the same as your workers', you should be able to establish common ground with them. This often starts with the hiring process and is maintained through a vision statement.

The info is here.

Sunday, May 12, 2019

Looking at the Mueller report from a mental health perspective

Bandy X. Lee, Leonard L. Glass and Edwin B. Fisher
The Boston Globe
Updated May 9, 2019

Here is an excerpt:

These episodes demonstrate not only a lack of control over emotions but preoccupation with threats to the self. There is no room for consideration of national plans or policies, or his own role in bringing about his predicament and how he might change, but instead a singular focus on how he is a victim of circumstance and his familiar whining about unfairness.

This mindset can easily turn into rage reactions; it is commonly found in violent offenders in the criminal justice system, who perpetually consider themselves victims under attack, even as they perpetrate violence against others, often without provocation. In this manner, a “victim mentality” and paranoia are symptoms that carry a high risk of violence.

“We noted, among other things, that the president stated on more than 30 occasions that he ‘does not recall’ or ‘remember’ or have an ‘independent recollection’ of information called for by the questions. Other answers were ‘incomplete or imprecise.’ ” (Vol. II, p. C-1)

This response is from a president who, in public rallies, rarely lacks certainty, no matter how false his assertions and claims that he has “the world’s greatest memory” and “one of the great memories of all time.” His lack of recall is particularly meaningful in the context of his unprecedented mendacity, which alone is dangerous and divisive for the country. Whether he truly does not remember or is totally fabricating, either is pathological and highly dangerous in someone who has command over the largest military in the world and over thousands of nuclear weapons.

The Mueller report details numerous lies by the president, perhaps most clearly regarding his handling of the disclosure of the meeting at Trump Tower (Vol II, p. 98ff). First he denied knowing about the meeting, then described it as only about adoption, then denied crafting his son’s response, and then, in his formal response to Mueller, conceded that it was he who dictated the press release. Lying per se is not especially remarkable. Coupled with the other characteristics noted here, however, lying becomes a part of a pervasive, compelling, reflexive pattern of distraught gut reactions for handling challenges by misleading, manipulating, and blocking others’ access to the truth. Rather than being seen as bona fide alternatives, challenges are perceived as personal threats and responded to in a dangerous, no-holds-barred manner.

The info is here.

Saturday, May 11, 2019

Free Will, an Illusion? An Answer from a Pragmatic Sentimentalist Point of View

Maureen Sie
Appears in: Caruso, G. (ed.), June 2013, Exploring the Illusion of Free Will and Moral Responsibility, Rowman & Littlefield.

According to some people, diverse findings in the cognitive and neurosciences suggest that free will is an illusion: We experience ourselves as agents, but in fact our brains decide, initiate, and judge before ‘we’ do (Soon, Brass, Heinze and Haynes 2008; Libet and Gleason 1983). Others have replied that the distinction between ‘us’ and ‘our brains’ makes no sense (e.g., Dennett 2003) or that scientists misperceive the conceptual relations that hold between free will and responsibility (Roskies 2006). Many others regard the neuroscientific findings as irrelevant to their views on free will. They do not believe that deterministic processes are incompatible with free will to begin with and, hence, do not understand why deterministic processes in our brain would be (see Sie and Wouters 2008, 2010). That latter response should be understood against the background of the philosophical free will discussion. In philosophy, free will is traditionally approached as a metaphysical problem, one that needs to be dealt with in order to discuss the legitimacy of our practices of responsibility. The emergence of our moral practices is seen as a result of the assumption that we possess free will (or some capacity associated with it), and the main question discussed is whether that assumption is compatible with determinism. In this chapter we want to steer clear of this 'metaphysical' discussion.

The question that interests us in this chapter is whether the above-mentioned scientific findings are relevant to our use of the concept of free will when that concept is approached from a different angle. We call this different angle the 'pragmatic sentimentalist' approach to free will (hereafter the PS-approach). This approach can be traced back to Peter F. Strawson’s influential essay “Freedom and Resentment” (Strawson 1962). Contrary to the metaphysical approach, the PS-approach does not understand free will as a concept that somehow precedes our moral practices. Rather, it is assumed that everyday talk of free will naturally arises in a practice that is characterized by certain reactive attitudes that we take towards one another. This is why it is called 'sentimentalist.' In this approach, the practical purposes of the concept of free will take center stage. This is why it is called 'pragmatist.'

A draft of the book chapter can be downloaded here.

Friday, May 10, 2019

Privacy, data science and personalised medicine. Time for a balanced discussion

Claudia Pagliari
LinkedIn.com Post
Originally posted March 26, 2019

There are several fundamental truths that those of us working at the intersection of data science, ethics and medical research have recognised for some time. Firstly that 'anonymised’ and ‘pseudonymised' data can potentially be re-identified through the convergence of related variables, coupled with clever inference methods (although this is by no means easy). Secondly that genetic data is not just about individuals but also about families and generations, past and future. Thirdly, as we enter an increasingly digitized society where transactional, personal and behavioural data from public bodies, businesses, social media, mobile devices and IoT are potentially linkable, the capacity of data to tell meaningful stories about us is becoming constrained only by the questions we ask and the tools we are able to deploy to get the answers. Some would say that privacy is an outdated concept, and control and transparency are the new by-words. Others either disagree or are increasingly confused and disenfranchised.

Some of the quotes from the top brass of Iceland’s DeCODE Genetics, appearing in today’s BBC’s News, neatly illustrate why we need to remain vigilant to the ethical dilemmas presented by the use of data sciences for personalised medicine. For those of you who are not aware, this company has been at the centre of innovation in population genomics since its inception in the 1990s and overcame a state outcry over privacy and consent, which led to its temporary bankruptcy, before rising phoenix-like from the ashes. The fact that its work has been able to continue in an era of increasing privacy legislation and regulation shows just how far the promise of personalized medicine has skewed the policy narrative and the business agenda in recent years. What is great about Iceland, in terms of medical research, is that it is a relatively small country that has been subjected to historically low levels of immigration and has a unique family naming system and good national record keeping, which means that the pedigree of most of its citizens is easy to trace. This makes it an ideal Petri dish for genetic researchers. And here’s where the rub is. In short, by fully genotyping only 10,000 people from this small country, with its relatively stable gene pool, and integrating this with data on their family trees - and doubtless a whole heap of questionnaires and medical records - the company has, with the consent of a few, effectively seized the data of the "entire population".

The info is here.

An Evolutionary Perspective On Free Will Belief

Cory Clark & Bo Winegard
Science Trends
Originally posted April 9, 2019

Here is an excerpt:

Both scholars and everyday people seem to agree that free will (whatever it is) is a prerequisite for moral responsibility (though note, among philosophers, there are numerous definitions and camps regarding how free will and moral responsibility are linked). This suggests that a crucial function of free will beliefs is the promotion of holding others morally responsible. And research supports this. Specifically, when people are exposed to another’s harmful behavior, they increase their broad beliefs in the human capacity for free action. Thus, believing in free will might facilitate the ability of individuals to punish harmful members of the social group ruthlessly.

But recent research suggests that free will is about more than just punishment. People might seek morally culpable agents not only when desiring to punish, but also when desiring to praise. A series of studies by Clark and colleagues (2018) found that, whereas people generally attributed more free will to morally bad actions than to morally good actions, they attributed more free will to morally good actions than morally neutral ones. Moreover, whereas free will judgments for morally bad actions were primarily driven by affective desires to punish, free will judgments for morally good actions were sensitive to a variety of characteristics of the behavior.

Thursday, May 9, 2019

The 'debate of the century': what happened when Jordan Peterson debated Slavoj Žižek

Stephen Marche
The Guardian
Originally published April 20, 2019

Here is an excerpt:

The great surprise of this debate turned out to be how much in common the old-school Marxist and the Canadian identity politics refusenik had.

One hated communism. The other hated communism but thought that capitalism possessed inherent contradictions. The first one agreed that capitalism possessed inherent contradictions. And that was basically it. They both wanted the same thing: capitalism with regulation, which is what every sane person wants. The Peterson-Žižek encounter was the ultra-rare case of a debate in 2019 that was perhaps too civil.

They needed enemies, needed combat, because in their solitudes, they had so little to offer. Peterson is neither a racist nor a misogynist. He is a conservative. He seemed, in person, quite gentle. But when you’ve said that, you’ve said everything. Somehow hectoring mobs have managed to turn him into an icon of all they are not. Remove him from his enemies and he is a very poor example of a very old thing – the type of writer who, from Samuel Smiles’ Self-Help to Eckhart Tolle’s The Power of Now, has promised simple answers to complex problems. Rules for Life, as if there were such things.

The info is here.

The moral behavior of ethics professors: A replication-extension in German-speaking countries

Philipp Schönegger & Johannes Wagner
(2019) Philosophical Psychology, 32:4, 532-559
DOI: 10.1080/09515089.2019.1587912

Abstract

What is the relation between ethical reflection and moral behavior? Does professional reflection on ethical issues positively impact moral behaviors? To address these questions, Schwitzgebel and Rust empirically investigated if philosophy professors engaged with ethics on a professional basis behave any morally better or, at least, more consistently with their expressed values than do non-ethicist professors. Findings from their original US-based sample indicated that neither is the case, suggesting that there is no positive influence of ethical reflection on moral action. In the study at hand, we attempted to cross-validate this pattern of results in the German-speaking countries and surveyed 417 professors using a replication-extension research design. Our results indicate a successful replication of the original effect that ethicists do not behave any morally better compared to other academics across the vast majority of normative issues. Yet, unlike the original study, we found mixed results on normative attitudes generally. On some issues, ethicists and philosophers even expressed more lenient attitudes. However, one issue on which ethicists not only held stronger normative attitudes but also reported better corresponding moral behaviors was vegetarianism.

Wednesday, May 8, 2019

Billions spent rebuilding Notre Dame shows lack of morality among wealthy

Gillian Fulford
Indiana Daily News Column
Originally posted April 23, 2019

Here is an excerpt:

Estimates to end world hunger are between $7 and $265 billion a year, and surely with 2,208 billionaires in the world, a few hundred could spare some cash to help ensure people aren’t starving to death. There aren’t billionaires in the news rushing to give money toward food aid, but even the richest man in Europe donated to repair the church.

Repairing churches is not a life and death matter. Churches, while culturally and religiously significant, are not necessary for life in the way that nutritious food is. Being an absurdly wealthy person who only donates money for things you find aesthetically pleasing is morally bankrupt in a world where money could literally fund the end of world hunger.

This isn’t to say that rebuilding the Notre Dame is bad — preserving culturally significant places is important. But the Roman Catholic Church is the richest religious organization in the world — it can probably manage repairing a church without the help of wealthy donors.

At a time when there are heated protests in the streets of France over taxes that unfairly affect the poor, pledging money toward buildings seems fraught. Spending billions on unnecessary buildings is a slap in the face to French people fighting for equitable wealth and tax distribution.

The info is here.

The ethics of emerging technologies

Jessica Hallman
www.techxplore.com
Originally posted April 10, 2019


Here is an excerpt:

"There's a new field called data ethics," said Fonseca, associate professor in Penn State's College of Information Sciences and Technology and 2019-2020 Faculty Fellow in Penn State's Rock Ethics Institute. "We are collecting data and using it in many different ways. We need to start thinking more about how we're using it and what we're doing with it."

By approaching emerging technology with a philosophical perspective, Fonseca can explore the ethical dilemmas surrounding how we gather, manage and use information. He explained that with the rise of big data, for example, many scientists and analysts are foregoing formulating hypotheses in favor of allowing data to make inferences on particular problems.

"Normally, in science, theory drives observations. Our theoretical understanding guides both what we choose to observe and how we choose to observe it," Fonseca explained. "Now, with so much data available, science's classical picture of theory-building is under threat of being inverted, with data being suggested as the source of theories in what is being called data-driven science."

Fonseca shared these thoughts in his paper, "Cyber-Human Systems of Thought and Understanding," which was published in the April 2019 issue of the Journal of the Association of Information Sciences and Technology. Fonseca co-authored the paper with Michael Marcinowski, College of Liberal Arts, Bath Spa University, United Kingdom; and Clodoveu Davis, Computer Science Department, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.

In the paper, the researchers propose a concept to bridge the divide between theoretical thinking and a-theoretical, data-driven science.

The info is here.

Tuesday, May 7, 2019

Ethics Alone Can’t Fix Big Tech

Daniel Susser
Slate.com
Originally posted April 17, 2019

Here is an excerpt:

At a deeper level, these issues highlight problems with the way we’ve been thinking about how to create technology for good. Desperate for anything to rein in otherwise indiscriminate technological development, we have ignored the different roles our theoretical and practical tools are designed to play. With no coherent strategy for coordinating them, none succeed.

Consider ethics. In discussions about emerging technologies, there is a tendency to treat ethics as though it offers the tools to answer all values questions. I suspect this is largely ethicists’ own fault: Historically, philosophy (the larger discipline of which ethics is a part) has mostly neglected technology as an object of investigation, leaving that work for others to do. (Which is not to say there aren’t brilliant philosophers working on these issues; there are. But they are a minority.) The result, as researchers from Delft University of Technology and Leiden University in the Netherlands have shown, is that the vast majority of scholarly work addressing issues related to technology ethics is being conducted by academics trained and working in other fields.

This makes it easy to forget that ethics is a specific area of inquiry with a specific purview. And like every other discipline, it offers tools designed to address specific problems. To create a world in which A.I. helps people flourish (rather than just generate profit), we need to understand what flourishing requires, how A.I. can help and hinder it, and what responsibilities individuals and institutions have for creating technologies that improve our lives. These are the kinds of questions ethics is designed to address, and critically important work in A.I. ethics has begun to shed light on them.

At the same time, we also need to understand why attempts at building “good technologies” have failed in the past, what incentives drive individuals and organizations not to build them even when they know they should, and what kinds of collective action can change those dynamics. To answer these questions, we need more than ethics. We need history, sociology, psychology, political science, economics, law, and the lessons of political activism. In other words, to tackle the vast and complex problems emerging technologies are creating, we need to integrate research and teaching around technology with all of the humanities and social sciences.

The info is here.

Are Placebo-Controlled, Relapse Prevention Trials in Schizophrenia Research Still Necessary or Ethical?

Ryan E. Lawrence, Paul S. Appelbaum, Jeffrey A. Lieberman
JAMA Psychiatry. Published online April 10, 2019.
doi:10.1001/jamapsychiatry.2019.0275

Randomized, placebo-controlled trials have been the gold standard for evaluating the safety and efficacy of new psychotropic drugs for more than half a century. Although the US Food and Drug Administration (FDA) does not require placebo-controlled trial data to approve new drugs or marketing indications, they have become the industry standard for psychotropic drug development.

Placebos are controversial. The FDA guidelines state “when a new treatment is tested for a condition for which no effective treatment is known, there is usually no ethical problem with a study comparing the new treatment to placebo.”1 However, “in cases where an available treatment is known to prevent serious harm, such as death or irreversible morbidity, it is generally inappropriate to use a placebo control”. When new antipsychotics are developed for schizophrenia, it can be debated which guideline applies.

From the Conclusion:

We believe the time has come to cease the use of placebo in relapse prevention studies and encourage the use of active comparators that would protect patients from relapse and provide information on the comparative effectiveness of the drugs studied. We recommend that pharmaceutical companies not seek maintenance labeling if it would require placebo-controlled, relapse prevention trials. However, for putative antipsychotics with a novel mechanism of action, placebo-controlled, relapse prevention trials may still be justifiable.

The info is here.

Monday, May 6, 2019

How do we make moral decisions?

Dartmouth College
Press Release
Originally released April 18, 2019

When it comes to making moral decisions, we often think of the golden rule: do unto others as you would have them do unto you. Yet, why we make such decisions has been widely debated. Are we motivated by feelings of guilt, where we don't want to feel bad for letting the other person down? Or by fairness, where we want to avoid unequal outcomes? Some people may rely on principles of both guilt and fairness and may switch their moral rule depending on the circumstances, according to a Radboud University - Dartmouth College study on moral decision-making and cooperation. The findings challenge prior research in economics, psychology and neuroscience, which is often based on the premise that people are motivated by one moral principle, which remains constant over time. The study was published recently in Nature Communications.

"Our study demonstrates that with moral behavior, people may not in fact always stick to the golden rule. While most people tend to exhibit some concern for others, others may demonstrate what we have called 'moral opportunism,' where they still want to look moral but want to maximize their own benefit," said lead author Jeroen van Baar, a postdoctoral research associate in the department of cognitive, linguistic and psychological sciences at Brown University, who started this research when he was a scholar at Dartmouth visiting from the Donders Institute for Brain, Cognition and Behavior at Radboud University.

"In everyday life, we may not notice that our morals are context-dependent since our contexts tend to stay the same daily. However, under new circumstances, we may find that the moral rules we thought we'd always follow are actually quite malleable," explained co-author Luke J. Chang, an assistant professor of psychological and brain sciences and director of the Computational Social Affective Neuroscience Laboratory (Cosan Lab) at Dartmouth. "This has tremendous ramifications if one considers how our moral behavior could change under new contexts, such as during war," he added.

The info is here.

The research is here.

Ethical Considerations Regarding Internet Searches for Patient Information.

Charles C. Dike, Philip Candilis, Barbara Kocsis and others
Psychiatric Services
Published Online:17 Jan 2019

Abstract

In 2010, the American Medical Association developed policies regarding professionalism in the use of social media, but it did not present specific ethical guidelines on targeted Internet searches for information about a patient or the patient’s family members. The American Psychiatric Association (APA) provided some guidance in 2016 through the Opinions of the Ethics Committee, but published opinions are limited. On behalf of the APA Ethics Committee, the authors developed a resource document describing ethical considerations regarding Internet and social media searches for patient information, from which this article has been adapted. Recommendations include the following. Except in emergencies, it is advisable to obtain a patient’s informed consent before performing such a search. The psychiatrist should be aware of his or her motivations for performing a search and should avoid doing so unless it serves the patient’s best interests. Information obtained through such searches should be handled with sensitivity regarding the patient’s privacy. The psychiatrist should consider how the search might influence the clinician-patient relationship. When interpreted with caution, Internet- and social media–based information may be appropriate to consider in forensic evaluations.

The info is here.

Sunday, May 5, 2019

When a Colleague Dies, CEOs Change How They Lead

Guoli Chen
www.barrons.com
Originally posted April 8, 2019

Here is an excerpt:

A version of my research, “That Could Have Been Me: Director Deaths, CEO Mortality Salience and Corporate Prosocial Behavior” (co-authored with Craig Crossland and Sterling Huang and forthcoming in Management Science) notes the significant impact a director’s death can have on resource allocation within a firm and on CEO’s activities, both outside and inside the organization.

For example, we saw that CEOs who’d experienced the death of a director on their boards reduced the number of outside directorships they held in the publicly listed firms. At the same time, they increased their number of directorships in non-profit organizations. It seems that thoughts of mortality had inspired a desire to make a lasting, positive contribution to society, or to jettison some priorities in favor of more pro-social ones.

We also saw differences in how CEOs led their firms. In our study, which compared statistics from public firms where a director had died between 1990 and 2013 with similar firms where no director had died, we saw that CEOs who’d experienced the death of a close colleague spent less effort on their firms’ immediate growth or financial-return activities. We found an increase in cost of goods sold, and the companies they lead became less aggressive in expanding their assets and firm size after a director’s death. This could be due to the “quiet life” or “withdrawal behavior” hypotheses, which suggest that CEOs become less engaged with corporate activities after they confront the finiteness of their life span. They may shift their time and focus from corporate to family or community activities.

Meanwhile, we also observed that firms led by these CEOs increased their corporate social responsibility (CSR) activities after a director’s death. CEOs with a heightened awareness of death influence their firms’ resource allocation toward activities that benefit broader stakeholders, such as employee health plans, more environmentally friendly manufacturing processes, and charitable contributions.

The info is here.

Saturday, May 4, 2019

Moral Grandstanding in Public Discourse

Joshua Grubbs, Brandon Warmke, Justin Tosi, & Alicia James
PsyArXiv Preprints
Originally posted April 5, 2019

Abstract

Public discourse is often caustic and conflict-filled. This trend seems to be particularly evident when the content of such discourse is around moral issues (broadly defined) and when the discourse occurs on social media. Several explanatory mechanisms for such conflict have been explored in recent psychological and social-science literatures. The present work sought to examine a potentially novel explanatory mechanism defined in philosophical literature: Moral Grandstanding. According to philosophical accounts, Moral Grandstanding is the use of moral talk to seek social status. For the present work, we conducted five studies, using two undergraduate samples (Study 1, N = 361; Study 2, N = 356); a sample matched to U.S. norms for age, gender, race, income, and Census region (Study 3, N = 1,063); a YouGov sample matched to U.S. demographic norms (Study 4, N = 2,000); and a brief, one-month longitudinal study of Mechanical Turk workers in the U.S. (Study 5, baseline N = 499, follow-up n = 296). Across studies, we found initial support for the validity of Moral Grandstanding as a construct. Specifically, moral grandstanding was associated with status-seeking personality traits, as well as greater political and moral conflict in daily life.

Here is part of the Conclusion:

Public discourse regarding morally charged topics is prone to conflict and polarization, particularly on social media platforms that tend to facilitate ideological echo chambers. The present study introduces an interdisciplinary construct called Moral Grandstanding as a possible contributing factor to this phenomenon. MG links evolutionary psychology and moral philosophy to describe the use of public moral speech to enhance one’s status or image in the eyes of others. Specifically, MG is framed as an expression of status-seeking drives in the domain of public discourse. Self-reported motivations underlying grandstanding behaviors seem to be consistent with the construct of status-seeking more broadly, appearing to represent prestige and dominance striving, both of which were associated with greater interpersonal conflict and polarization.

The research is here.

Friday, May 3, 2019

Real or artificial? Tech titans declare AI ethics concerns

Matt O'Brien and Rachel Lerman
Associated Press
Originally posted April 7, 2019

Here is an excerpt:

"Ethical AI" has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.

But how much substance lies behind the increasingly public ethics campaigns? And who gets to decide which technological pursuits do no harm?

Google was hit with both questions when it formed a new board of outside advisers in late March to help guide how it uses AI in products. But instead of winning over potential critics, it sparked internal rancor. A little more than a week later, Google bowed to pressure from the backlash and dissolved the council.

The outside board fell apart in stages. One of the board's eight inaugural members quit within days and another quickly became the target of protests from Google employees who said her conservative views don't align with the company's professed values.

As thousands of employees called for the removal of Heritage Foundation President Kay Coles James, Google disbanded the board last week.

"It's become clear that in the current environment, (the council) can't function as we wanted," the company said in a statement.

The info is here.

Fla. healthcare executive found guilty in $1B Medicare fraud case

Associated Press 
Modern Healthcare
Originally published April 5, 2019

Florida healthcare executive Philip Esformes was found guilty Friday of paying and receiving kickbacks and other charges as part of the biggest Medicare fraud case in U.S. history.

During the seven-week trial in federal court in Miami, prosecutors called Esformes a trickster and mastermind of a scheme paying bribes and kickbacks to doctors to refer patients to his nursing home network from 2009 to 2016. The fraud also included paying off a regulator to learn when inspectors would make surprise visits to his facilities, or if patients had made complaints.

Esformes owns dozens of Miami-Dade nursing facilities as well as homes in Miami, Los Angeles and Chicago.

The info is here.

Thursday, May 2, 2019

A Facebook request: Write a code of tech ethics

Mike Godwin
www.latimes.com
Originally published April 30, 2019

Facebook is preparing to pay a multi-billion-dollar fine and dealing with ongoing ire from all corners for its user privacy lapses, the viral transmission of lies during elections, and delivery of ads in ways that skew along gender and racial lines. To grapple with these problems (and to get ahead of the bad PR they created), Chief Executive Mark Zuckerberg has proposed that governments get together and set some laws and regulations for Facebook to follow.

But Zuckerberg should be aiming higher. The question isn’t just what rules should a reformed Facebook follow. The bigger question is what all the big tech companies’ relationships with users should look like. The framework needed can’t be created out of whole cloth just by new government regulation; it has to be grounded in professional ethics.

Doctors and lawyers, as they became increasingly professionalized in the 19th century, developed formal ethical codes that became the seeds of modern-day professional practice. Tech-company professionals should follow their example. An industry-wide code of ethics could guide companies through the big questions of privacy and harmful content.

The info is here.

Editor's note: Many social media companies engage in unethical behavior on a regular basis, typically revolving around lack of consent, lack of privacy standards, filter bubble (personalized algorithms) issues, lack of accountability, lack of transparency, harmful content, and third party use of data.

Part-revived pig brains raise slew of ethical quandaries

Nita A. Farahany, Henry T. Greely & Charles M. Giattino
Nature
Originally published April 17, 2019

Scientists have restored and preserved some cellular activities and structures in the brains of pigs that had been decapitated for food production four hours before. The researchers saw circulation in major arteries and small blood vessels, metabolism and responsiveness to drugs at the cellular level and even spontaneous synaptic activity in neurons, among other things. The team formulated a unique solution and circulated it through the isolated brains using a network of pumps and filters called BrainEx. The solution was cell-free, did not coagulate and contained a haemoglobin-based oxygen carrier and a wide range of pharmacological agents.

The remarkable study, published in this week’s Nature, offers the promise of an animal or even human whole-brain model in which many cellular functions are intact. At present, cells from animal and human brains can be sustained in culture for weeks, but only so much can be gleaned from isolated cells. Tissue slices can provide snapshots of local structural organization, yet they are woefully inadequate for questions about function and global connectivity, because much of the 3D structure is lost during tissue preparation.

The work also raises a host of ethical issues. There was no evidence of any global electrical activity — the kind of higher-order brain functioning associated with consciousness. Nor was there any sign of the capacity to perceive the environment and experience sensations. Even so, because of the possibilities it opens up, the BrainEx study highlights potential limitations in the current regulations for animals used in research.

The info is here.

Wednesday, May 1, 2019

Chinese scientists create super monkeys by injecting brains with human DNA

Harriet Brewis
www.msn.com
Originally published April 13, 2019

Chinese scientists have created super-intelligent monkeys by injecting them with human DNA.

Researchers transferred a gene linked to brain development, called MCPH1, into rhesus monkey embryos.

Once they were born, the monkeys were found to have better memories, reaction times and processing abilities than their untouched peers.

"This was the first attempt to understand the evolution of human cognition using a transgenic monkey model," said Bing Su, a geneticist at Kunming Institute of Zoology in China.

The research was conducted by Dr Su’s team at the Kunming Institute of Zoology, in collaboration with the Chinese Academy of Sciences and University of North Carolina in the US.

“Our findings demonstrated that nonhuman primates (excluding ape species) have the potential to provide important – and potentially unique – insights into basic questions of what actually makes humans unique,” the authors wrote in the study.

The info is here.

The U.S. Healthcare Cost Crisis

Gallup
Report issued April 2019

Executive Summary

The high cost of healthcare in the United States is a significant source of apprehension and fear for millions of Americans, according to a new national survey by West Health and Gallup.

Relative to the quality of the care they receive, Americans overwhelmingly agree they pay too much, and receive too little, and few have confidence that elected officials can solve the problem.

Americans in large numbers are borrowing money, skipping treatments and cutting back on household expenses because of high costs, and a large percentage fear a major health event could bankrupt them. More than three-quarters of Americans are also concerned that high healthcare costs could cause significant and lasting damage to the U.S. economy.

Despite the financial burden and fears caused by high healthcare costs, partisan filters lead to divergent views of the healthcare system at large: By a wide margin, more Republicans than Democrats consider the quality of care in the U.S. to be the best or among the best in the world — all while the U.S. significantly outspends other advanced economies on healthcare with dismal outcomes on basic health indicators such as infant mortality and heart attack mortality.

Republicans and Democrats are about as likely to resort to drastic measures, from deferring care to cutting back on other expenses including groceries, clothing, and gas and electricity. And many do not see the situation improving. In fact, most believe costs will only increase. When given the choice between a freeze in healthcare costs for the next five years or a 10% increase in household income, 61% of Americans report that their preference is a freeze in costs.

West Health and Gallup’s major study included interviews with members of Gallup’s National Panel of Households and healthcare industry experts as well as a nationally representative survey of more than 3,537 randomly selected adults.

The report can be downloaded here.

Tuesday, April 30, 2019

Ethics in AI Are Not Optional

Rob Daly
www.marketsmedia.com
Originally posted April 12, 2019

Artificial intelligence is a critical feature in the future of the financial services, but firms should not be penny-wise and pound-foolish in their race to develop the most advanced offering as possible, caution experts.

“You do not need to be on the frontier of technology if you are not a technology company,” said Greg Baxter, the chief digital officer at MetLife, in his keynote address during Celent’s annual Innovation and Insight Day. “You just have to permit your people to use the technology.”

More effort should be spent on developing the various policies that will govern the deployment of the technology, he added.

MetLife spends more time on ethics and legal than it does on technology, according to Baxter.

Firms should be wary when implementing AI in such a fashion that it alienates clients by being too intrusive and ruining the customer experience. “If data is the new currency, its credit line is trust,” said Baxter.

The info is here.

Should animals, plants, and robots have the same rights as you?

Sigal Samuel
www.vox.com
Originally posted April 4, 2019

Here is an excerpt:

The moral circle is a fundamental concept among philosophers, psychologists, activists, and others who think seriously about what motivates people to do good. It was introduced by historian William Lecky in the 1860s and popularized by philosopher Peter Singer in the 1980s.

Now it’s cropping up more often in activist circles as new social movements use it to make the case for granting rights to more and more entities. Animals. Nature. Robots. Should they all get rights similar to the ones you enjoy? For example, you have the right not to be unjustly imprisoned (liberty) and the right not to be experimented on (bodily integrity). Maybe animals should too.

If you’re tempted to dismiss that notion as absurd, ask yourself: How do you decide whether an entity deserves rights?

Many people think that sentience, the ability to feel sensations like pain and pleasure, is the deciding factor. If that’s the case, what degree of sentience is required to make the cut? Maybe you think we should secure legal rights for chimpanzees and elephants — as the Nonhuman Rights Project is aiming to do — but not for, say, shrimp.

Some people think sentience is the wrong litmus test; they argue we should include anything that’s alive or that supports living things. Maybe you think we should secure rights for natural ecosystems, as the Community Environmental Legal Defense Fund is doing. Lake Erie won legal personhood status in February, and recent years have seen rights granted to rivers and forests in New Zealand, India, and Colombia.

The info is here.

Monday, April 29, 2019

How Trump has changed white evangelicals’ views about morality

David Campbell and Geoffrey Layman
The Washington Post
Originally published April 25, 2019

Recently, Democratic presidential candidate Pete Buttigieg has been criticizing religious conservatives — especially Vice President Pence — for supporting President Trump, despite his lewd behavior. To drive home the point, Buttigieg often refers to Trump as the “porn star president.”

We were curious about the attitudes of rank-and-file evangelicals. After more than two years of Trump in the White House, how do they feel about a president’s private morality?

From 2011 to 2016, white evangelicals dramatically changed their minds about the importance of politicians’ private behavior

Back in 2016, many journalists and commentators pointed out a stunning change in how white evangelicals perceived the connection between private and public morality. In 2011, a poll conducted by the Public Religion Research Institute (PRRI) and the Religion News Service found that 60 percent of white evangelicals believed that a public official who “commits an immoral act in their personal life” cannot still “behave ethically and fulfill their duties in their public and professional life.” But in an October 2016 poll by PRRI and the Brookings Institution — after the release of the infamous “Access Hollywood” tape — only 20 percent of evangelicals, answering the same question, said that private immorality meant someone could not behave ethically in public.

The info is here.

Nova Scotia to become 1st in North America with presumed consent for organ donation

Michael Gorman
www.cbc.com
Originally posted April 2, 2019

Here is an excerpt:

Premier Stephen McNeil said the bill fills a need within the province, noting Nova Scotia has some of the highest per capita rates of willing donors in the country.

"That doesn't always translate into the actual act of giving," he said.

"We know that there are many ways that we can continue to improve the system that we have."

McNeil pledged to put the necessary services in place to allow the province's donor program to live up to the promise of the legislation.

"We know that in many parts of our province — including the one I live in, which is a rural part of Nova Scotia — we have work to do," he said.

"I will make sure that the work that is required to build the system and supports around this will happen."

The bill will not be proclaimed right away.

Health Minister Randy Delorey said government officials would spend 12-18 months educating the public about the change and working on getting health-care workers the support they need to enhance the program.

Even with the change, Delorey said, people should continue making their wishes known to loved ones, so there can be no question about intentions.

The info is here.

Sunday, April 28, 2019

No Support for Historical Candidate Gene or Candidate Gene-by-Interaction Hypotheses for Major Depression Across Multiple Large Samples

Richard Border, Emma C. Johnson, and others
The American Journal of Psychiatry
https://doi.org/10.1176/appi.ajp.2018.18070881

Abstract

Objective:
Interest in candidate gene and candidate gene-by-environment interaction hypotheses regarding major depressive disorder remains strong despite controversy surrounding the validity of previous findings. In response to this controversy, the present investigation empirically identified 18 candidate genes for depression that have been studied 10 or more times and examined evidence for their relevance to depression phenotypes.

Methods:
Utilizing data from large population-based and case-control samples (Ns ranging from 62,138 to 443,264 across subsamples), the authors conducted a series of preregistered analyses examining candidate gene polymorphism main effects, polymorphism-by-environment interactions, and gene-level effects across a number of operational definitions of depression (e.g., lifetime diagnosis, current severity, episode recurrence) and environmental moderators (e.g., sexual or physical abuse during childhood, socioeconomic adversity).

Results:
No clear evidence was found for any candidate gene polymorphism associations with depression phenotypes or any polymorphism-by-environment moderator effects. As a set, depression candidate genes were no more associated with depression phenotypes than noncandidate genes. The authors demonstrate that phenotypic measurement error is unlikely to account for these null findings.

Conclusions:
The study results do not support previous depression candidate gene findings, in which large genetic effects are frequently reported in samples orders of magnitude smaller than those examined here. Instead, the results suggest that early hypotheses about depression candidate genes were incorrect and that the large number of associations reported in the depression candidate gene literature are likely to be false positives.

The research is here.

Editor's note: Depression is a complex, multivariate experience that is not primarily genetic in its origins.
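The polymorphism main-effect analyses the abstract describes can be pictured with a toy association test. The sketch below is not the authors' preregistered pipeline; it is a minimal, self-contained illustration of asking whether genotype frequencies differ between depression cases and controls, with all counts invented.

```python
# Illustrative sketch only: a Pearson chi-square test of association
# between one hypothetical candidate polymorphism (0/1/2 copies of the
# minor allele) and case/control depression status. The study itself
# used preregistered regression analyses on far larger samples.

def chi_square_stat(table):
    """Pearson chi-square statistic for an r x k contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Invented genotype counts (columns: 0, 1, 2 copies of the minor allele)
cases    = [210, 480, 310]
controls = [200, 500, 300]
stat = chi_square_stat([cases, controls])
print(round(stat, 3))  # 0.816 -- far below significance: no association
```

With 2 degrees of freedom, a statistic this small is nowhere near significance, which is the shape of the null result the paper reports across its much larger samples.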

Saturday, April 27, 2019

When Would a Robot Have Free Will?

Eddy Nahmias
The NeuroEthics Blog
Originally posted April 1, 2019

Here are two excerpts:

Joshua Shepherd (2015) had found evidence that people judge humanoid robots that behave like humans and are described as conscious to be more free and responsible than robots that carry out these behaviors without consciousness. We wanted to explore what sorts of consciousness influence attributions of free will and moral responsibility—i.e., deserving praise and blame for one’s actions. We developed several scenarios describing futuristic humanoid robots or aliens, in which they were described either as having or as lacking conscious sensations, conscious emotions, and language and intelligence. We found that people’s attributions of free will generally track their attributions of conscious emotions more than attributions of conscious sensory experiences or intelligence and language. Consistent with this, we also found that people are more willing to attribute free will to aliens than robots, and in more recent studies, we see that people also attribute free will to many animals, with dolphins and dogs near the levels attributed to human adults.

These results suggest two interesting implications. First, when philosophers analyze free will in terms of the control required to be morally responsible—e.g., being ‘reasons-responsive’—they may be creating a term of art (perhaps a useful one). Laypersons seem to distinguish the capacity to have free will from the capacities required to be responsible. Our studies suggest that people may be willing to hold intelligent but non-conscious robots or aliens responsible even when they are less willing to attribute to them free will.

(cut)

A second interesting implication of our results is that many people seem to think that having a biological body and conscious feelings and emotions are important for having free will. The question is: why? Philosophers and scientists have often asserted that consciousness is required for free will, but most have been vague about what the relationship is. One plausible possibility we are exploring is that people think that what matters for an agent to have free will is that things can really matter to the agent. And for anything to matter to an agent, she has to be able to care—that is, she has to have foundational, intrinsic motivations that ground and guide her other motivations and decisions.

The info is here.

Friday, April 26, 2019

EU beats Google to the punch in setting strategy for ethical A.I.

Elizabeth Schulze
www.CNBC.com
Originally posted April 8, 2019

Less than one week after Google scrapped its AI ethics council, the European Union has set out its own guidelines for achieving “trustworthy” artificial intelligence.

On Monday, the European Commission released a set of steps to maintain ethics in artificial intelligence, as companies and governments weigh both the benefits and risks of the far-reaching technology.

“The ethical dimension of AI is not a luxury feature or an add-on,” said Andrus Ansip, EU vice-president for the digital single market, in a press release Monday. “It is only with trust that our society can fully benefit from technologies.”

The EU defines artificial intelligence as systems that show “intelligent behavior,” allowing them to analyze their environment and perform tasks with some degree of autonomy. AI is already transforming businesses in a variety of functions, like automating repetitive tasks and analyzing troves of data. But the technology raises a series of ethical questions, such as how to ensure algorithms are programmed without bias and how to hold AI accountable if something goes wrong.

The info is here.

Social media giants no longer can avoid moral compass

Don Hepburn
thehill.com
Originally published April 1, 2019

Here is an excerpt:

There are genuine moral, legal and technical dilemmas in addressing the challenges raised by the ubiquitous nature of the not-so-new social media conglomerates. Why, then, are social media giants avoiding the moral compass, evading legal guidelines and ignoring technical solutions available to them? The answer is, their corporate culture refuses to be held accountable to the same standards the public has applied to all other global corporations for the past five decades.

A wholesale change of culture and leadership is required within the social media industry. The culture of “everything goes” because “we are the future” needs to be more than tweaked; it must come to an end. Like any large conglomerate, social media platforms cannot ignore the public’s demand that they act with some semblance of responsibility. Just like the early stages of the U.S. coal, oil and chemical industries, the social media industry is impacting not only our physical environment but the social good and public safety. No serious journalism organization would ever allow a stranger to write their own hate-filled stories (with photos) for their newspaper’s daily headline — that’s why there’s a position called editor-in-chief.

If social media giants insist they are open platforms, then anyone can purposefully exploit them for good or evil. But if social media platforms demonstrate no moral or ethical standards, they should be subject to some form of government regulation. We have regulatory environments where we see the need to protect the public good against the need for profit-driven enterprises; why should social media platforms be given preferential treatment?

The info is here.

Thursday, April 25, 2019

The New Science of How to Argue—Constructively

Jesse Singal
The Atlantic
Originally published April 7, 2019

Here is an excerpt:

Once you know a term like decoupling, you can identify instances in which a disagreement isn’t really about X anymore, but about Y and Z. When some readers first raised doubts about a now-discredited Rolling Stone story describing a horrific gang rape at the University of Virginia, they noted inconsistencies in the narrative. Others insisted that such commentary fit into destructive tropes about women fabricating rape claims, and therefore should be rejected on its face. The two sides weren’t really talking; one was debating whether the story was a hoax, while the other was responding to the broader issue of whether rape allegations are taken seriously. Likewise, when scientists bring forth solid evidence that sexual orientation is innate, or close to it, conservatives have lashed out against findings that would “normalize” homosexuality. But the dispute over which sexual acts, if any, society should discourage is totally separate from the question of whether sexual orientation is, in fact, inborn. Because of a failure to decouple, people respond indignantly to factual claims when they’re actually upset about how those claims might be interpreted.

Nerst believes that the world can be divided roughly into “high decouplers,” for whom decoupling comes easy, and “low decouplers,” for whom it does not. This is the sort of area where erisology could produce empirical insights: What characterizes people’s ability to decouple? Nerst believes that hard-science types are better at it, on average, while artistic types are worse. After all, part of being an artist is seeing connections where other people don’t—so maybe it’s harder for them to not see connections in some cases. Nerst might be wrong. Either way, it’s the sort of claim that could be fairly easily tested if the discipline caught on.

The info is here.

The Brave New World of Sex Robots

Mark Wolverton
undark.org
Originally posted March 29, 2019

Here is an excerpt:

But as the technology develops apace, so too is a host of other issues, including political and social ones (Why such emphasis on feminine bots rather than male? Do sexbots really need a “gender” at all?); philosophical and ethical ones (Is sex with a robot really “sex”? What if the robots are sentient?); and legal ones (Does sex with a robot count as cheating on your human partner?).

Many of these concerns overlap with present controversies regarding AI in general, but in this realm, tied so closely with the most profound manifestations of human intimacy, they feel more personal and controversial. Perhaps as a result, Devlin has a self-admitted tendency at times to slip into somewhat heavy-handed feminist polemics, which can overshadow or obscure possible alternative interpretations to some questions — it’s arguable whether the “Blade Runner” films have “a woman problem,” for example, or whether the prevalence of sexbots with idealized and identifiably feminine aesthetics is solely a result of “male objectification.”

Informed by her background as a computer scientist, Devlin provides excellent nuts-and-bolts technical explanations of the fundamentals of machine learning, neural networks, and language processing that provide the necessary foundation for her explorations of the subject, whose sometimes sensitive nature is eased by her sly sense of humor.

The info is here.

Wednesday, April 24, 2019

134 Activities to Add to Your Self-Care Plan

GoodTherapy.org Staff
www.goodtherapy.org
Originally posted June 13, 2015

At its most basic, self-care is any intentional action taken to meet an individual’s physical, mental, spiritual, or emotional needs. In short, it’s all the little ways we take care of ourselves to avoid a breakdown in those respective areas of health.

You may find that, at certain points, the world and the people in it place greater demands on your time, energy, and emotions than you might feel able to handle. This is precisely why self-care is so important. It is the routine maintenance you need to do to function at your best, not only for others but also for yourself.

GoodTherapy.org’s own business and administrative, web development, outreach and advertising, editorial and education, and support teams have compiled a massive list of some of their own personal self-care activities to offer some help for those struggling to come up with their own maintenance plan. Next time you find yourself saying, “I really need to do something for myself,” browse our list and pick something that speaks to you. Be silly, be caring to others, and make your self-care a priority! In most cases, taking care of yourself doesn’t even have to cost anything. And because self-care is as unique as the individual performing it, we’d love to invite you to comment and add any of your own personal self-care activities in the comments section below. Give back to your fellow readers and share some of the little ways you take care of yourself.

The list is here.

Note: Self-care enhances the possibility of competent practice. Good self-care skills are important to promote ethical practice.

The Growing Marketplace For AI Ethics

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

As companies have raced to adopt artificial intelligence (AI) systems at scale, they have also sped through, and sometimes spun out, in the ethical obstacle course AI often presents.

AI-powered loan and credit approval processes have been marred by unforeseen bias. Same with recruiting tools. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners.

Unfortunately, there’s no industry-standard, best-practices handbook on AI ethics for companies to follow—at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks.

A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI. Below is a brief roundup of some of the more influential models to emerge—from the Asilomar Principles to best-practice recommendations from the AI Now Institute.

“Companies need to study these ethical frameworks because this is no longer a technology question. It’s an existential human one,” says Hanson Hosein, director of the Communication Leadership program at the University of Washington. “These questions must be answered hand-in-hand with whatever’s being asked about how we develop the technology itself.”

The info is here.

Tuesday, April 23, 2019

4 Ways Lying Becomes the Norm at a Company

Ron Carucci
Harvard Business Review
Originally published February 15, 2019

Many of the corporate scandals in the past several years — think Volkswagen or Wells Fargo — have been cases of wide-scale dishonesty. It’s hard to fathom how lying and deceit permeated these organizations. Some researchers point to group decision-making processes or psychological traps that snare leaders into justification of unethical choices. Certainly those factors are at play, but they largely explain dishonest behavior at an individual level, and I wondered about systemic factors that might influence whether or not people in organizations distort or withhold the truth from one another.

This is what my team set out to understand through a 15-year longitudinal study. We analyzed 3,200 interviews that were conducted as part of 210 organizational assessments to see whether there were factors that predicted whether or not people inside a company will be honest. Our research yielded four factors — not individual character traits, but organizational issues — that played a role. The good news is that these factors are completely within a corporation’s control and improving them can make your company more honest, and help avert the reputation and financial disasters that dishonesty can lead to.

The stakes here are high. Accenture’s Competitive Agility Index, a 7,000-company, 20-industry analysis, for the first time tangibly quantified how a decline in stakeholder trust impacts a company’s financial performance. The analysis reveals more than half (54%) of companies on the index experienced a material drop in trust — from incidents such as product recalls, fraud, data breaches and c-suite missteps — which equates to a minimum of $180 billion in missed revenues. Worse, following a drop in trust, a company’s index score drops 2 points on average, negatively impacting revenue growth by 6% and EBITDA by 10% on average.

The info is here.

How big tech designs its own rules of ethics to avoid scrutiny and accountability

David Watts
theconversation.com
Originally posted March 28, 2019

Here is an excerpt:

“Applied ethics” aims to bring the principles of ethics to bear on real-life situations. There are numerous examples.

Public sector ethics are governed by law. There are consequences for those who breach them, including disciplinary measures, termination of employment and sometimes criminal penalties. To become a lawyer, I had to provide evidence to a court that I am a “fit and proper person”. To continue to practice, I’m required to comply with detailed requirements set out in the Australian Solicitors Conduct Rules. If I breach them, there are consequences.

The features of applied ethics are that they are specific, there are feedback loops, guidance is available, they are embedded in organisational and professional culture, there is proper oversight, there are consequences when they are breached and there are independent enforcement mechanisms and real remedies. They are part of a regulatory apparatus and not just “feel good” statements.

Feelgood, high-level data ethics principles are not fit for the purpose of regulating big tech. Applied ethics may have a role to play but because they are occupation or discipline specific they cannot be relied on to do all, or even most of, the heavy lifting.

The info is here.

Monday, April 22, 2019

Moral identity relates to the neural processing of third-party moral behavior

Carolina Pletti, Jean Decety, & Markus Paulus
Social Cognitive and Affective Neuroscience
https://doi.org/10.1093/scan/nsz016

Abstract

Moral identity, or moral self, is the degree to which being moral is important to a person’s self-concept. It is hypothesized to be the “missing link” between moral judgment and moral action. However, its cognitive and psychophysiological mechanisms are still subject to debate. In this study, we used Event-Related Potentials (ERPs) to examine whether the moral self concept is related to how people process prosocial and antisocial actions. To this end, participants’ implicit and explicit moral self-concept was assessed. We examined whether individual differences in moral identity relate to differences in early, automatic processes (i.e. EPN, N2) or late, cognitively controlled processes (i.e. LPP) while observing prosocial and antisocial situations. Results show that a higher implicit moral self was related to a lower EPN amplitude for prosocial scenarios. In addition, an enhanced explicit moral self was related to a lower N2 amplitude for prosocial scenarios. The findings demonstrate that the moral self affects the neural processing of morally relevant stimuli during third-party evaluations. They support theoretical considerations that the moral self already affects (early) processing of moral information.

Here is the conclusion:

Taken together, notwithstanding some limitations, this study provides novel insights into the nature of the moral self. Importantly, the results suggest that the moral self-concept influences the early processing of morally relevant contexts. Moreover, the implicit and the explicit moral self-concepts have different neural correlates, influencing early and intermediate processing stages, respectively. Overall, the findings inform theoretical approaches on how the moral self informs social information processing (Lapsley & Narvaez, 2004).
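The individual-differences analyses the abstract reports can be pictured with a small sketch: extract a component's mean amplitude in a time window per participant, then correlate it with moral-self scores. The helpers and all numbers below are invented for illustration; the study used standard ERP preprocessing and a far richer design.

```python
# Illustrative sketch only: relating implicit moral-self scores to EPN
# amplitude for prosocial scenarios. All values below are made up.

import math

def mean_amplitude(erp, window):
    """Mean voltage of an ERP sample series within a (start, stop) slice."""
    start, stop = window
    segment = erp[start:stop]
    return sum(segment) / len(segment)

def pearson_r(u, v):
    """Pearson correlation between two equal-length sequences."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) *
                    sum((b - mv) ** 2 for b in v))
    return num / den

# One invented participant: average amplitude in a component window
erp = [0.2, 0.5, 1.1, 1.4, 1.0, 0.6]
print(round(mean_amplitude(erp, (2, 5)), 3))  # 1.167

# Across invented participants: the reported pattern is a negative
# relation -- higher implicit moral self, lower EPN amplitude
implicit_self = [1, 2, 3, 4]
epn_amplitude = [6, 5, 3, 2]
print(round(pearson_r(implicit_self, epn_amplitude), 3))  # -0.99
```

A strongly negative r of this kind is the toy analogue of the paper's finding that a higher implicit moral self related to a lower EPN amplitude for prosocial scenarios.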

Psychiatry’s Incurable Hubris

Gary Greenberg
The Atlantic
April 2019 issue

Here is an excerpt:

The need to dispel widespread public doubt haunts another debacle that Harrington chronicles: the rise of the “chemical imbalance” theory of mental illness, especially depression. The idea was first advanced in the early 1950s, after scientists demonstrated the principles of chemical neurotransmission; it was supported by the discovery that consciousness-altering drugs such as LSD targeted serotonin and other neurotransmitters. The idea exploded into public view in the 1990s with the advent of direct-to-consumer advertising of prescription drugs, antidepressants in particular. Harrington documents ad campaigns for Prozac and Zoloft that assured wary customers the new medications were not simply treating patients’ symptoms by altering their consciousness, as recreational drugs might. Instead, the medications were billed as repairing an underlying biological problem.

The strategy worked brilliantly in the marketplace. But there was a catch. “Ironically, just as the public was embracing the ‘serotonin imbalance’ theory of depression,” Harrington writes, “researchers were forming a new consensus” about the idea behind that theory: It was “deeply flawed and probably outright wrong.” Stymied, drug companies have for now abandoned attempts to find new treatments for mental illness, continuing to peddle the old ones with the same claims. And the news has yet to reach, or at any rate affect, consumers. At last count, more than 12 percent of Americans ages 12 and older were taking antidepressants. The chemical-imbalance theory, like the revamped DSM, may fail as science, but as rhetoric it has turned out to be a wild success.

The info is here.

Sunday, April 21, 2019

Microsoft will be adding AI ethics to its standard checklist for product release

Alan Boyle
www.geekwire.com
Originally posted March 25, 2019

Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.

AI ethics will join privacy, security and accessibility on the list, Shum said today at MIT Technology Review’s EmTech Digital conference in San Francisco.

Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”

Among the ethical concerns are the potential for AI agents to pick up all-too-human biases from the data on which they’re trained, to piece together privacy-invading insights through deep data analysis, to go horribly awry due to gaps in their perceptual capabilities, or simply to be given too much power by human handlers.

Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.

The info is here.

Saturday, April 20, 2019

Cardiologist Eric Topol on How AI Can Bring Humanity Back to Medicine

Alice Park
Time.com
Originally published March 14, 2019

Here is an excerpt:

What are the best examples of how AI can work in medicine?

We’re seeing rapid uptake of algorithms that make radiologists more accurate. The other group already deriving benefit is ophthalmologists. Diabetic retinopathy, which is a terribly underdiagnosed cause of blindness and a complication of diabetes, is now diagnosed by a machine with an algorithm that is approved by the Food and Drug Administration. And we’re seeing it hit at the consumer level with a smart-watch app with a deep learning algorithm to detect atrial fibrillation.

Is that really artificial intelligence, in the sense that the machine has learned about medicine like doctors?

Artificial intelligence is different from human intelligence. It’s really about using machines with software and algorithms to ingest data and come up with the answer, whether that data is what someone says in speech, or reading patterns and classifying or triaging things.

What worries you the most about AI in medicine?

I have lots of worries. First, there’s the issue of privacy and security of the data. And I’m worried about whether the AI algorithms are always proved out with real patients. Finally, I’m worried about how AI might worsen some inequities. Algorithms are not biased, but the data we put into those algorithms, because they are chosen by humans, often are. But I don’t think these are insoluble problems.

The info is here.

Friday, April 19, 2019

Leader's group-norm violations elicit intentions to leave the group – If the group-norm is not affirmed

Lara Ditrich, Adrian Lüders, Eva Jonas, & Kai Sassenberg
Journal of Experimental Social Psychology
Available online 2 April 2019

Abstract

Group members, even central ones like group leaders, do not always adhere to their group's norms and show norm-violating behavior instead. Observers of this kind of behavior have been shown to react negatively in such situations, and in extreme cases, may even leave their group. The current work set out to test how this reaction might be prevented. We assumed that group-norm affirmations can buffer leaving intentions in response to group-norm violations and tested three potential mechanisms underlying the buffering effect of group-norm affirmations. To this end, we conducted three experiments in which we manipulated group-norm violations and group-norm affirmations. In Study 1, we found group-norm affirmations to buffer leaving intentions after group-norm violations. However, we did not find support for the assumption that group-norm affirmations change how a behavior is evaluated or preserve group members' identification with their group. Thus, neither of these variables can explain the buffering effect of group-norm affirmations. Studies 2 & 3 revealed that group-norm affirmations instead reduce perceived effectiveness of the norm-violator, which in turn predicted lower leaving intentions. The present findings will be discussed based on previous research investigating the consequences of norm violations.

The research is here.
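The indirect pathway reported in Studies 2 and 3 (affirmation lowers the violator's perceived effectiveness, which in turn predicts lower leaving intentions) is a classic mediation pattern. The sketch below is a minimal, hand-rolled Baron-Kenny-style version with invented data, not the authors' actual analysis.

```python
# Illustrative sketch only: X = group-norm affirmed (0/1),
# M = perceived effectiveness of the norm-violator,
# Y = intention to leave the group. All data are invented.

def cov(u, v):
    """Population covariance of two equal-length sequences."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

def mediation(x, m, y):
    """Return (a, b, indirect): a = X->M slope, b = M->Y slope
    controlling for X, and the indirect effect a * b."""
    a = cov(x, m) / cov(x, x)
    det = cov(x, x) * cov(m, m) - cov(x, m) ** 2
    b = (cov(m, y) * cov(x, x) - cov(x, y) * cov(x, m)) / det
    return a, b, a * b

x = [0, 0, 0, 0, 1, 1, 1, 1]  # norm affirmed? (0 = no, 1 = yes)
m = [4, 5, 5, 6, 2, 3, 3, 4]  # perceived effectiveness of violator
y = [4, 5, 5, 6, 2, 3, 3, 4]  # leaving intention
a, b, indirect = mediation(x, m, y)
print(a, b, indirect)  # -2.0 1.0 -2.0
```

In this toy dataset, affirmation lowers perceived effectiveness (a is negative), effectiveness positively predicts leaving intentions (b is positive), and the negative indirect effect mirrors the buffering pattern the abstract describes.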