Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, May 14, 2018

Computer Says No: Part 2 - Explainability

Jasmine Leonard
theRSA.org
Originally posted March 23, 2018

Here is an excerpt:

The trouble is, since many decisions should be explainable, it’s tempting to assume that automated decision systems should also be explainable.  But as discussed earlier, automated decision systems don’t actually make decisions; they make predictions.  And when a prediction is used to guide a decision, the prediction is itself part of the explanation for that decision.  It therefore doesn’t need to be explained itself; it merely needs to be justifiable.

This is a subtle but important distinction.  To illustrate it, imagine you were to ask your doctor to explain her decision to prescribe you a particular drug.  She could do so by saying that the drug had cured many other people with similar conditions in the past and that she therefore predicted it would cure you too.  In this case, her prediction that the drug will cure you is the explanation for her decision to prescribe it.  And it’s a good explanation because her prediction is justified – not on the basis of an explanation of how the drug works, but on the basis that it’s proven to be effective in previous cases.  Indeed, explanations of how drugs work are often not available because the biological mechanisms by which they operate are poorly understood, even by those who produce them.  Moreover, even if your doctor could explain how the drug works, unless you have considerable knowledge of pharmacology, it’s unlikely that the explanation would actually increase your understanding of her decision to prescribe the drug.

If explanations of predictions are unnecessary to justify their use in decision-making, then what else can justify the use of a prediction made by an automated decision system?  The best answer, I believe, is that the system is shown to be sufficiently accurate.  What “sufficiently accurate” means is obviously up for debate, but at a minimum I would suggest that it means the system’s predictions are at least as accurate as those produced by a trained human.  It also means that there are no other readily available systems that produce more accurate predictions.
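Leonard's accuracy criterion can be made concrete. The sketch below is not from the article; it simply illustrates, in Python, one way her two bars might be checked on held-out cases, assuming you have the system's predictions, a trained human's judgments on the same cases, and predictions from any readily available alternative systems. The function name and the choice of metric are illustrative only.

```python
# A minimal sketch (not from the article) of the "sufficiently accurate" test:
# a prediction system is justified for use in decision-making if it is at least
# as accurate as a trained human and no readily available alternative system
# is more accurate.

from sklearn.metrics import accuracy_score  # any agreed accuracy metric works


def sufficiently_accurate(y_true, system_preds, human_preds, alternative_preds=()):
    """Return True if the system clears both bars on held-out labelled cases."""
    system_acc = accuracy_score(y_true, system_preds)
    human_acc = accuracy_score(y_true, human_preds)

    # Bar 1: at least as accurate as a trained human reviewer.
    if system_acc < human_acc:
        return False

    # Bar 2: no readily available alternative system does better.
    for alt_preds in alternative_preds:
        if accuracy_score(y_true, alt_preds) > system_acc:
            return False

    return True
```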

The article is here.

No Luck for Moral Luck

Markus Kneer, University of Zurich; Edouard Machery, University of Pittsburgh
Draft, March 2018

Abstract

Moral philosophers and psychologists often assume that people judge morally lucky and morally unlucky agents differently, an assumption that stands at the heart of the puzzle of moral luck. We examine whether the asymmetry is found for reflective intuitions regarding wrongness, blame, permissibility and punishment judgments, whether people's concrete, case-based judgments align with their explicit, abstract principles regarding moral luck, and what psychological mechanisms might drive the effect. Our experiments produce three findings: First, in within-subjects experiments favorable to reflective deliberation, wrongness, blame, and permissibility judgments across different moral luck conditions are the same for the vast majority of people. The philosophical puzzle of moral luck, and the challenge to the very possibility of systematic ethics it is frequently taken to engender, thus simply does not arise. Second, punishment judgments are significantly more outcome-dependent than wrongness, blame, and permissibility judgments. While this is evidence in favor of current dual-process theories of moral judgment, the latter need to be qualified since punishment does not pattern with blame. Third, in between-subjects experiments, outcome has an effect on all four types of moral judgments. This effect is mediated by negligence ascriptions and can ultimately be explained as due to differing probability ascriptions across cases.
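The mediation claim in the final sentence refers to a standard statistical analysis. The rough sketch below is not the authors' code; it only illustrates, under assumed column names, how one might test whether the effect of outcome (lucky vs. unlucky) on a moral judgment runs through negligence ascriptions, using a simple product-of-coefficients approach.

```python
# A rough sketch (not the authors' analysis) of a simple mediation check:
# does the outcome effect on moral judgments run through negligence ascriptions?
import pandas as pd
import statsmodels.formula.api as smf


def simple_mediation(df: pd.DataFrame) -> dict:
    """df is assumed to have columns:
    outcome    - 0 = lucky (no harm), 1 = unlucky (harm)
    negligence - rated negligence of the agent
    judgment   - e.g. a blame or punishment rating
    """
    total = smf.ols("judgment ~ outcome", data=df).fit()                # path c
    a = smf.ols("negligence ~ outcome", data=df).fit()                  # path a
    direct = smf.ols("judgment ~ outcome + negligence", data=df).fit()  # paths c' and b

    indirect = a.params["outcome"] * direct.params["negligence"]        # a * b
    return {
        "total_effect": total.params["outcome"],
        "direct_effect": direct.params["outcome"],
        "indirect_effect": indirect,
    }
```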

The manuscript is here.

Sunday, May 13, 2018

Facebook Uses AI To Predict Your Future Actions for Advertisers

Sam Biddle
The Intercept
Originally posted April 13, 2018

Here is an excerpt:

Asked by Fortune’s Stacey Higginbotham where Facebook hoped its machine learning work would take it in five years, Chief Technology Officer Mike Schroepfer said in 2016 his goal was that AI “makes every moment you spend on the content and the people you want to spend it with.” Using this technology for advertising was left unmentioned. A 2017 TechCrunch article declared, “Machine intelligence is the future of monetization for Facebook,” but quoted Facebook executives in only the mushiest ways: “We want to understand whether you’re interested in a certain thing generally or always. Certain things people do cyclically or weekly or at a specific time, and it’s helpful to know how this ebbs and flows,” said Mark Rabkin, Facebook’s vice president of engineering for ads. The company was also vague about the melding of machine learning to ads in a 2017 Wired article about the company’s AI efforts, which alluded to efforts “to show more relevant ads” using machine learning and anticipate what ads consumers are most likely to click on, a well-established use of artificial intelligence. Most recently, during his congressional testimony, Zuckerberg touted artificial intelligence as a tool for curbing hate speech and terrorism.
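For readers unfamiliar with the "well-established use of artificial intelligence" the excerpt mentions, the toy example below shows what click-through prediction looks like in its simplest form. It has nothing to do with Facebook's actual systems; the features and data are made up purely for illustration.

```python
# A toy illustration (unrelated to Facebook's actual systems) of click-through
# prediction: estimating how likely a user is to click an ad from past
# behaviour, here with a plain logistic regression on synthetic features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per user:
# [minutes on site, past ad clicks, ad-topic affinity score from 0 to 1]
X = rng.random((1000, 3)) * [60, 10, 1]

# Synthetic labels: clicks become likelier with engagement and topical match.
y = (0.02 * X[:, 0] + 0.1 * X[:, 1] + 1.5 * X[:, 2]
     + rng.normal(0, 1, 1000)) > 2.0

model = LogisticRegression().fit(X, y.astype(int))

new_user = [[30.0, 4, 0.8]]  # 30 minutes on site, 4 past clicks, high affinity
print("Predicted click probability:", model.predict_proba(new_user)[0, 1])
```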

The article is here.

Saturday, May 12, 2018

Bystander risk, social value, and ethics of human research

S. K. Shah, J. Kimmelman, A. D. Lyerly, H. F. Lynch, and others
Science, 13 Apr 2018: 158-159

Two critical, recurring questions can arise in many areas of research with human subjects but are poorly addressed in much existing research regulation and ethics oversight: How should research risks to “bystanders” be addressed? And how should research be evaluated when risks are substantial but not offset by direct benefit to participants, and the benefit to society (“social value”) is context-dependent? We encountered these issues while serving on a multidisciplinary, independent expert panel charged with addressing whether human challenge trials (HCTs) in which healthy volunteers would be deliberately infected with Zika virus could be ethically justified (1). Based on our experience on that panel, which concluded that there was insufficient value to justify a Zika HCT at the time of our report, we propose a new review mechanism to preemptively address issues of bystander risk and contingent social value.

(cut)

Some may object that generalizing and institutionalizing this approach could slow valuable research by adding an additional layer for review. However, embedding this process within funding agencies could preempt ethical problems that might otherwise stymie research. Concerns that CERCs might suffer from “mission creep” could be countered by establishing clear charters and triggers for deploying CERCs. Unlike IRBs, their opinions should be publicly available to provide precedent for future research programs or for IRBs evaluating particular protocols at a later date.

The information is here.

Friday, May 11, 2018

AI experts want government algorithms to be studied like environmental hazards

Dave Gershgorn
Quartz (www.qz.com)
Originally published April 9, 2018

Artificial intelligence experts are urging governments to require assessments of AI implementation that mimic the environmental impact reports now required by many jurisdictions.

AI Now, a nonprofit founded to study the societal impacts of AI, said an algorithmic impact assessment (AIA) would assure that the public and governments understand the scope, capability, and secondary impacts an algorithm could have, and people could voice concerns if an algorithm was behaving in a biased or unfair way.

“If governments deploy systems on human populations without frameworks for accountability, they risk losing touch with how decisions have been made, thus rendering them unable to know or respond to bias, errors, or other problems,” the report said. “The public will have less insight into how agencies function, and have less power to question or appeal decisions.”
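AI Now's report describes the AIA as a documentation and accountability mechanism rather than a piece of software, but a minimal sketch can make the idea tangible. The fields below are hypothetical illustrations of what such an assessment might record; they are not taken from AI Now's actual framework.

```python
# A minimal, hypothetical sketch of an algorithmic impact assessment (AIA)
# record for a government system. Field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    deploying_agency: str
    purpose: str                       # what decision the system informs
    affected_populations: List[str]    # who is subject to its outputs
    data_sources: List[str]            # inputs the system relies on
    known_limitations: List[str]       # accuracy gaps, bias audits, error rates
    appeal_process: str                # how individuals contest a decision
    public_comments: List[str] = field(default_factory=list)

    def log_concern(self, comment: str) -> None:
        """Record a public concern, e.g. a report of biased behaviour."""
        self.public_comments.append(comment)
```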

The information is here.

Samantha’s suffering: why sex machines should have rights too

Victoria Brooks
The Conversation
Originally posted April 5, 2018

Here is the conclusion:

Machines are indeed what we make them. This means we have an opportunity to avoid assumptions and prejudices brought about by the way we project human feelings and desires. But does this ethically entail that robots should be able to consent to or refuse sex, as human beings would?

The innovative philosophers and scientists Frank and Nyholm have found many legal reasons for answering both yes and no (a robot’s lack of human consciousness and legal personhood, and the “harm” principle, for example). Again, we find ourselves seeking to apply a very human law. But feelings of suffering outside of relationships, or identities accepted as the “norm”, are often illegitimised by law.

So a “legal” framework which has its origins in heteronormative desire does not necessarily construct the foundation of consent and sexual rights for robots. Rather, as the renowned post-human thinker Rosi Braidotti argues, we need an ethic, as opposed to a law, which helps us find a practical and sensitive way of deciding, taking into account emergences from cross-species relations. The kindness and empathy we feel toward Samantha may be a good place to begin.

The article is here.

Thursday, May 10, 2018

A Two-Factor Model of Ethical Culture

Caterina Bulgarella
ethicalsystems.org

Making Progress in the Field of Business Ethics

Over the past 15 years, behavioral science has provided practitioners with a uniquely insightful perspective on the organizational elements companies need to focus on to build an ethical culture. Pieced together, this research can be used to address the growing challenges business must tackle today.

Faced with unprecedented complexity and rapid change, more and more organizations are feeling the limitations of an old-fashioned approach to ethics. In this new landscape, the importance of a proactive ethical stance has become increasingly clear. Not only is a strong focus on business integrity likely to reduce the costs of misconduct, but it can afford companies a solid corporate reputation, genuine employee compliance, robust governance, and even increased profitability.

The need for a smarter, deeper, and more holistic approach to ethical conduct is also strengthened by the inherent complexity of human behavior. As research continues to shed light on the factors that undermine people’s ability to ‘do the right thing,’ we are reminded of how difficult it is to solve for ethics without addressing the larger challenge of organizational culture.

The components that shape the culture of an organization exercise a constant and unrelenting influence on how employees process information, make decisions, and, ultimately, respond to ethical dilemmas.  This is why, in order to help business achieve a deeper and more systematic ethical focus, we must understand the ingredients that make up an ethical culture.

The information is here.

The WEIRD Science of Culture, Values, and Behavior

Kim Armstrong
Psychological Science
Originally posted April 2018

Here is an excerpt:

While the dominant norms of a society may shape our behavior, children first experience the influence of those cultural values through the attitudes and beliefs of their parents, which can significantly impact their psychological development, said Heidi Keller, a professor of psychology at the University of Osnabrueck, Germany.

Until recently, research within the field of psychology focused mainly on WEIRD (Western, educated, industrialized, rich, and democratic) populations, Keller said, limiting the understanding of the influence of culture on childhood development.

“The WEIRD group represents maximally 5% of the world’s population, but probably more than 90% of the researchers and scientists producing the knowledge that is represented in our textbooks work with participants from that particular context,” Keller explained.

Keller and colleagues’ research on the ecocultural model of development, which accounts for the interaction of socioeconomic and cultural factors throughout a child’s upbringing, explores this gap in the research by comparing the caretaking styles of rural and urban families throughout India, Cameroon, and Germany. The experiences of these groups can differ significantly from the WEIRD context, Keller notes, with rural farmers — who make up 30% to 40% of the world’s population — tending to live in extended family households while having more children at a younger age after an average of just 7 years of education.

The information is here.

Wednesday, May 9, 2018

How To Deliver Moral Leadership To Employees

John Baldoni
Forbes.com
Originally posted April 12, 2018

Here is an excerpt:

When it comes to moral authority there is a disconnect between what is expected and what is delivered. So what can managers do to fulfill their employees' expectations?

First, let’s cover what not to do – preach! Employees don’t want words; they want actions. They also do not expect to have to follow a particular religious creed at work. Just as with the separation of church and state, there is an implied separation in the workplace, especially now with employees of many different (or no) faiths. (There are exceptions within privately held, family-run businesses.)

LRN advocates doing two things: first, pause to reflect on the situation as a means of connecting with values; second, act with humility. The former may be easier than the latter, but it is only with humility that leaders connect more realistically with others. If you act your title, you set up barriers to understanding. If you act as a leader, you open the door to greater understanding.

Dov Seidman, CEO of LRN, advises leaders to instill purpose, elevate and inspire individuals, and live their values. Importantly, in this report Seidman challenges leaders to embrace moral challenges, as he says, by “constant wrestling with the questions of right and wrong, fairness and justice, and with ethical dilemmas.”

The information is here.