Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, March 31, 2019

Is Ethical A.I. Even Possible?

Cade Metz
The New York Times
Originally posted March 1, 2019

Here is an excerpt:

As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.

All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protest the company’s military contracts, Mr. Smith said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

The info is here.

Saturday, March 30, 2019

AI Safety Needs Social Scientists

Geoffrey Irving and Amanda Askell
distill.pub
Originally published February 19, 2019

Here is an excerpt:

Learning values by asking humans questions

We start with the premise that human values are too complex to describe with simple rules. By “human values” we mean our full set of detailed preferences, not general goals such as “happiness” or “loyalty”. One source of complexity is that values are entangled with a large number of facts about the world, and we cannot cleanly separate facts from values when building ML models. For example, a rule that refers to “gender” would require an ML model that accurately recognizes this concept, but Buolamwini and Gebru found that several commercial gender classifiers with a 1% error rate on white men failed to recognize black women up to 34% of the time. Even where people have correct intuition about values, we may be unable to specify precise rules behind these intuitions. Finally, our values may vary across cultures, legal systems, or situations: no learned model of human values will be universally applicable.

If humans can’t reliably report the reasoning behind their intuitions about values, perhaps we can still make value judgements in specific cases. To realize this approach in an ML context, we ask humans a large number of questions about whether an action or outcome is better or worse, then train on this data. “Better or worse” will include both factual and value-laden components: for an AI system trained to say things, “better” statements might include “rain falls from clouds”, “rain is good for plants”, “many people dislike rain”, etc. If the training works, the resulting ML system will be able to replicate human judgement about particular situations, and thus have the same “fuzzy access to approximate rules” about values as humans. We also train the ML system to come up with proposed actions, so that it knows both how to perform a task and how to judge its performance. This approach works at least in simple cases, such as Atari games, simple robotics tasks, and language-specified goals in gridworlds. The questions we ask change as the system learns to perform different types of actions, which is necessary because the model of what is better or worse will only be accurate if we have applicable data to generalize from.
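
To make the comparison-based training described above more concrete, here is a minimal sketch, not the authors’ code, of fitting a scoring model to pairwise “better or worse” human judgements. The RewardModel network, the preference_loss objective, the fixed-length encoding of outcomes, and the random toy data are all assumptions invented purely for illustration.

# A minimal sketch (assumptions, not the authors' code) of reward-model training
# from pairwise human comparisons: each training pair holds an outcome humans
# judged "better" and one judged "worse", and the model learns to give the
# preferred outcome the higher score (a Bradley-Terry style objective).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores an encoded action or outcome; higher means 'judged better'."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def preference_loss(score_preferred, score_rejected):
    # Push the preferred outcome's score above the rejected one's.
    return -F.logsigmoid(score_preferred - score_rejected).mean()

# Toy loop with random vectors standing in for encoded outcomes and human labels.
dim = 16
model = RewardModel(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    preferred = torch.randn(32, dim)  # outcomes humans judged "better"
    rejected = torch.randn(32, dim)   # outcomes humans judged "worse"
    loss = preference_loss(model(preferred), model(rejected))
    opt.zero_grad()
    loss.backward()
    opt.step()

In the setup the excerpt describes, the encoded outcomes would come from the system’s own proposed actions or statements, and the comparison labels would come from the human questions, rather than from random placeholders.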

The info is here.

Friday, March 29, 2019

Artificial Morality

Robert Koehler
www.commondreams.org
Originally posted March 14, 2019

Artificial Intelligence is one thing. Artificial morality is another. It may sound something like this:

“First, we believe in the strong defense of the United States and we want the people who defend it to have access to the nation’s best technology, including from Microsoft.”

The words are those of Microsoft president Brad Smith, writing on a corporate blogsite last fall in defense of the company’s new contract with the U.S. Army, worth $479 million, to make augmented reality headsets for use in combat. The headsets, known as the Integrated Visual Augmentation System, or IVAS, are a way to “increase lethality” when the military engages the enemy, according to a Defense Department official. Microsoft’s involvement in this program set off a wave of outrage among the company’s employees, with more than a hundred of them signing a letter to the company’s top executives demanding that the contract be canceled.

“We are a global coalition of Microsoft workers, and we refuse to create technology for warfare and oppression. We are alarmed that Microsoft is working to provide weapons technology to the U.S. Military, helping one country’s government ‘increase lethality’ using tools we built. We did not sign up to develop weapons, and we demand a say in how our work is used.”

The info is here.

The history and future of digital health in the field of behavioral medicine

Danielle Arigo, Danielle E. Jake-Schoffman, Kathleen Wolin, Ellen Beckjord, & Eric B. Hekler
J Behav Med (2019) 42: 67.
https://doi.org/10.1007/s10865-018-9966-z

Abstract

Since its earliest days, the field of behavioral medicine has leveraged technology to increase the reach and effectiveness of its interventions. Here, we highlight key areas of opportunity and recommend next steps to further advance intervention development, evaluation, and commercialization with a focus on three technologies: mobile applications (apps), social media, and wearable devices. Ultimately, we argue that the future of digital health behavioral science research lies in finding ways to advance more robust academic-industry partnerships. These include academics consciously working to prepare and train the twenty-first-century workforce for digital health, advancing methods that can balance industry’s need for efficiency with academia’s desire for rigor and reproducibility, and establishing common practices and procedures that support more ethical approaches to promoting healthy behavior.

Here is a portion of the Summary

An unknown landscape of privacy and data security

Another relatively new set of challenges centers around the issues of privacy and data security presented by digital health tools. First, some commercially available technologies that were originally produced for purposes other than promoting healthy behavior (e.g., social media) are now being used to study health behavior and deliver interventions. This poses a variety of potential privacy issues depending on the privacy settings used, including the fact that data from non-participants may inadvertently be viewed and collected, and their rights should also be considered as part of study procedures (Arigo et al., 2018).  Privacy may be of particular concern as apps begin to incorporate additional smartphone technologies such as GPS location tracking and cameras (Nebeker et al., 2015).  Second, for commercial products that were originally designed for health behavior change (e.g., apps), researchers need to carefully read and understand the associated privacy and security agreements, be sure that participants understand these agreements, and include a summary of this information in their applications to ethics review boards.

Thursday, March 28, 2019

An Empirical Evaluation of the Failure of the Strickland Standard to Ensure Adequate Counsel to Defendants with Mental Disabilities Facing the Death Penalty

Michael L. Perlin, Talia Roitberg Harmon, & Sarah Chatt
Social Science Research Network 
http://dx.doi.org/10.2139/ssrn.3332730

Abstract

Anyone who has been involved with death penalty litigation in the past four decades knows that one of the most scandalous aspects of that process—in many ways, the most scandalous—is the inadequacy of counsel so often provided to defendants facing execution. By now, virtually anyone with even a passing interest is well versed in the cases and stories about sleeping lawyers, missed deadlines, alcoholic and disoriented lawyers, and, more globally, lawyers who simply failed to vigorously defend their clients. This is not news.

And, in the same vein, anyone who has been so involved with this area of law and policy for the past 35 years knows that it is impossible to make sense of any of these developments without a deep understanding of the Supreme Court’s decision in Strickland v. Washington, 466 U.S. 668 (1984), the case that established a pallid, virtually impossible-to-fail test for adequacy of counsel in such litigation. Again, this is not news.

We also know that some of the most troubling results in Strickland interpretations have come in cases in which the defendant was mentally disabled—either by serious mental illness or by intellectual disability. Some of the decisions in these cases—rejecting Strickland-based appeals—have been shocking, making a mockery out of a constitutionally based standard.

To the best of our knowledge, no one has—prior to this article—undertaken an extensive empirical analysis of how one discrete US federal circuit court of appeals has dealt with a wide array of Strickland claims in cases involving defendants with mental disabilities. We do this here. In this article, we reexamine these issues from the perspective of the 198 state cases decided in the Fifth Circuit from 1984 to 2017 involving death penalty verdicts in which, at some stage of the appellate process, a Strickland claim was made (only 13 of which resulted in even preliminary relief being granted under Strickland). As we demonstrate subsequently, Strickland is indeed a pallid standard, fostering “tolerance of abysmal lawyering,” and one that makes a mockery of the most vital of constitutional law protections: the right to adequate counsel.

This article proceeds as follows. First, we discuss the background of the development of counsel adequacy in death penalty cases. Next, we look carefully at Strickland, and the subsequent Supreme Court cases that appear—on the surface—to bolster it in this context. We then consider multiple jurisprudential filters that we believe must be taken seriously if this area of the law is to be given any authentic meaning. Next, we examine and interpret the data we have developed, looking carefully at what happened after the Strickland-ordered remand in the 13 Strickland “victories.” Finally, we look at this entire area of law through the filter of therapeutic jurisprudence, and explain why and how the charade of adequacy-of-counsel law fails miserably to meet the standards of this important school of thought.

Behind the Scenes, Health Insurers Use Cash and Gifts to Sway Which Benefits Employers Choose

Marshall Allen
Propublica.org
Originally posted February 20, 2019

Here is an excerpt:

These industry payments can’t help but influence which plans brokers highlight for employers, said Eric Campbell, director of research at the University of Colorado Center for Bioethics and Humanities.

“It’s a classic conflict of interest,” Campbell said.

There’s “a large body of virtually irrefutable evidence,” Campbell said, that shows drug company payments to doctors influence the way they prescribe. “Denying this effect is like denying that gravity exists.” And there’s no reason, he said, to think brokers are any different.

Critics say the setup is akin to a single real estate agent representing both the buyer and seller in a home sale. A buyer would not expect the seller’s agent to negotiate the lowest price or highlight all the clauses and fine print that add unnecessary costs.

“If you want to draw a straight conclusion: It has been in the best interest of a broker, from a financial point of view, to keep that premium moving up,” said Jeffrey Hogan, a regional manager in Connecticut for a national insurance brokerage and one of a band of outliers in the industry pushing for changes in the way brokers are paid.

The info is here.

Wednesday, March 27, 2019

Language analysis reveals recent and unusual 'moral polarisation' in Anglophone world

Andrew Masterson
Cosmos Magazine
Originally published March 4, 2019

Here is an excerpt:

Words conveying moral values in more specific domains, however, did not always accord with a similar pattern – revealing, say the researchers, the changing prominence of differing sets of concerns surrounding concepts such as loyalty and betrayal, individualism, and notions of authority.

Remarkably, perhaps, the study is only the second in the academic literature that uses big data to examine shifts in moral values over time. The first, by psychologists Pelin Kesebir and Selin Kesebir, published in The Journal of Positive Psychology in 2012, used two approaches to track the frequency of morally loaded words in a corpus of US books across the twentieth century.

The results revealed a “decline in the use of general moral terms”, and significant downturns in the use of words such as honesty, patience, and compassion.

Haslam and colleagues found that, at the headline level, their results, using a larger dataset, reflected the earlier findings. However, fine-grained investigations revealed a more complex picture. Nevertheless, they say, the changes in the frequency of use for particular types of moral terms are sufficient to allow the twentieth century to be divided into five distinct historical periods.

The words used in the search were taken from lists collated under what is known as Moral Foundations Theory (MFT), a generally supported framework that rejects the idea that morality is monolithic. Instead, the researchers explain, MFT aims to “categorise the automatic and intuitive emotional reactions that commonly occur in moral evaluation across cultures, and [identifies] five psychological systems (or foundations): Harm, Fairness, Ingroup, Authority, and Purity.”
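
For readers curious how such a frequency analysis works mechanically, below is a rough sketch, not the researchers’ actual pipeline, that tallies the relative frequency of morally loaded words by MFT foundation across slices of a corpus. The short word lists, the foundation_frequencies helper, and the toy corpus_by_year texts are placeholder assumptions; the published studies used full MFT dictionaries and large historical book corpora.

# A rough, hypothetical sketch of the word-counting idea (not the study's actual
# pipeline): tally the relative frequency of morally loaded words by Moral
# Foundations Theory category across yearly slices of a corpus. The tiny word
# lists and the two placeholder texts below are illustrative assumptions only.
from collections import Counter
import re

MFT_WORDS = {
    "Harm":      {"harm", "cruel", "suffer", "protect", "compassion"},
    "Fairness":  {"fair", "justice", "cheat", "honest", "equal"},
    "Ingroup":   {"loyal", "betray", "patriot", "solidarity"},
    "Authority": {"obey", "duty", "tradition", "rebel"},
    "Purity":    {"pure", "sacred", "degrade", "pious"},
}

def foundation_frequencies(text):
    """Relative frequency of each foundation's words in one text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {name: sum(counts[w] for w in words) / total
            for name, words in MFT_WORDS.items()}

# Placeholder corpus slices; a real analysis would use a large historical corpus.
corpus_by_year = {
    1950: "a loyal and honest citizen obeys tradition and protects the weak",
    2000: "it is unfair and cruel to cheat or betray those who suffer",
}
for year in sorted(corpus_by_year):
    print(year, foundation_frequencies(corpus_by_year[year]))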

The info is here.

The Value Of Ethics And Trust In Business.. With Artificial Intelligence

Stephen Ibaraki
Forbes.com
Originally posted March 2, 2019

Here is an excerpt:

Contributing positively to society and driving positive change are the focus of a growing discourse around the world, one that spans every sector and disruptive technologies such as Artificial Intelligence (AI).

With more than $20 trillion (USD) of wealth transferring from baby boomers to millennials, and millennials’ focus on the environment and social impact, this trend will accelerate. Business is aware and taking the lead in this movement to advance the human condition in a responsible and ethical manner. Values-based leadership, diversity, inclusion, investment, and long-term commitment are the multi-stakeholder commitments going forward.

“Over the last 12 years, we have repeatedly seen that those companies who focus on transparency and authenticity are rewarded with the trust of their employees, their customers and their investors. While negative headlines might grab attention, the companies who support the rule of law and operate with decency and fair play around the globe will always succeed in the long term,” explained Ethisphere CEO, Timothy Erblich. “Congratulations to all of the 2018 honorees.”

The info is here.

Tuesday, March 26, 2019

Does AI Ethics Have a Bad Name?

Calum Chace
Forbes.com
Originally posted March 7, 2019

Here is an excerpt:

Artificial intelligence is a technology, and a very powerful one, like nuclear fission. It will become increasingly pervasive, like electricity. Some say that its arrival may even turn out to be as significant as the discovery of fire. Like nuclear fission, electricity and fire, AI can have positive and negative impacts, and given how powerful it is and will become, it is vital that we figure out how to promote the positive outcomes and avoid the negative ones.

It's the bias that concerns people in the AI ethics community.  They want to minimise the amount of bias in the data which informs the AI systems that help us to make decisions – and ideally, to eliminate the bias altogether.  They want to ensure that tech giants and governments respect our privacy at the same time as they develop and deliver compelling products and services. They want the people who deploy AI to make their systems as transparent as possible so that in advance or in retrospect, we can check for sources of bias and other forms of harm.

But if AI is a technology like fire or electricity, why is the field called “AI ethics”?  We don’t have “fire ethics” or “electricity ethics,” so why should we have AI ethics?  There may be a terminological confusion here, and it could have negative consequences.

One possible downside is that people outside the field may get the impression that some sort of moral agency is being attributed to the AI, rather than to the humans who develop AI systems.  The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else. It makes no more sense to attribute moral agency to these systems than it does to a car or a rock.  It will probably be many years before we create an AI which can reasonably be described as a moral agent.

The info is here.