Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, June 30, 2017

Ethics and Artificial Intelligence With IBM Watson's Rob High

Blake Morgan
Forbes.com
Originally posted June 12, 2017

Artificial intelligence seems to be popping up everywhere, and it has the potential to change nearly everything we know about data and the customer experience. However, it also brings up new issues regarding ethics and privacy.

One of the keys to keeping AI ethical is for it to be transparent, says Rob High, vice president and chief technology officer of IBM Watson. When customers interact with a chatbot, for example, they need to know they are communicating with a machine and not an actual human. AI, like most other technology tools, is most effective when it is used to extend the natural capabilities of humans instead of replacing them. That means that AI and humans are best when they work together and can trust each other.

Chatbots are one of the most commonly used forms of AI. Although they can be used successfully in many ways, there is still a lot of room for growth. As they currently stand, chatbots mostly perform basic actions like turning on lights, providing directions, and answering simple questions that a person asks directly. However, in the future, chatbots should and will be able to go deeper to find the root of the problem. For example, a person asking a chatbot what her bank balance is might be asking the question because she wants to invest money or make a big purchase—a futuristic chatbot could find the real reason she is asking and turn it into a more developed conversation. In order to do that, chatbots will need to ask more questions and drill deeper, and humans need to feel comfortable providing their information to machines.

The article is here.

Ethical Interventions Means Giving Consumers A Say

Susan Liautaud
Wired Magazine
Originally published June 12, 2017

Here is an excerpt:

Increasingly, the people and companies with the technological or scientific ability to create new products or innovations are de facto making policy decisions that affect human safety and society. But these decisions are often based on the creator’s intent for the product, and they don't always take into account its potential risks and unforeseen uses. What if gene-editing is diverted for terrorist ends? What if human-pig chimeras mate? What if citizens prefer to see birds rather than flying cars when they look out a window? (Apparently, this is a real risk. Uber plans to offer flight-hailing apps by 2020.) What if Echo Look leads to mental health issues for teenagers? Who bears responsibility for the consequences?

Jennifer Doudna and Emmanuelle Charpentier’s landmark 2014 article in Science, “The new frontier of genome engineering with CRISPR-Cas9,” called for a broader discussion among “scientists and society at large” about the technology's responsible use. Other leading scientists have joined the call for caution before the technique is intentionally used to alter the human germ line. The National Academies of Sciences, Engineering, and Medicine recently issued a report recommending that the ethical framework applied to gene therapy also be used when considering Crispr applications. In effect, the experts ask whether their scientific brilliance should legitimize them as decision-makers for all of us.

Crispr might prevent Huntington’s disease and cure cancer. But should errors occur, it’s hard to predict the outcome, whether from benign use (by thoughtful and competent people) or misuse (by ill-intentioned actors).

Who should decide how Crispr should be used: Scientists? Regulators? Something in between, such as an academic institution, medical research establishment, or professional/industry association? The public? Which public, given the global impact of the decisions? Are ordinary citizens equipped to make such technologically complex ethical decisions? Who will inform the decision-makers about possible risks and benefits?

The article is here.

Thursday, June 29, 2017

Can a computer administer a Wechsler Intelligence Test?

Vrana, Scott R.; Vrana, Dylan T.
Professional Psychology: Research and Practice, Vol 48(3), Jun 2017, 191-198.

Abstract

Prompted by the rapid development of Pearson’s iPad-based Q-interactive platform for administering individual tests of cognitive ability (Pearson, 2016c), this article speculates about what it would take for a computer to administer the current versions of the Wechsler individual intelligence tests without the involvement of a psychologist or psychometrist. We consider the mechanics of administering and scoring each subtest and the more general clinical skills of motivating the client to perform, making observations of verbal and nonverbal behavior, and responding to the client’s off-task comments, questions, and nonverbal cues. It is concluded that we are very close to the point, given current hardware and artificial intelligence capabilities, at which administration of all subtests of the Wechsler Adult Intelligence Scale-Fourth Edition (PsychCorp, 2008) and Wechsler Intelligence Scale for Children-Fifth Edition (PsychCorp, 2014), and all assessment functions of the human examiner, could be performed by a computer. Potential acceptability of computer administration by clients and the psychological community is considered.

The article is here.

When is a leak ethical?

Cassandra Burke Robertson
The Conversation
Originally published June 12, 2017

Here is an excerpt:

Undoubtedly, leaking classified information violates the law. For some individuals, such as lawyers, leaking unclassified but still confidential information may also violate the rules of professional conduct.

But when is it ethical to leak?

Public interest disclosures

I am a scholar of legal ethics who has studied ethical decision-making in the political sphere.

Research has found that people are willing to blow the whistle when they believe that their organization has engaged in “corrupt and illegal conduct.” They may also speak up to prevent larger threats to cherished values, such as democracy and the rule of law. Law professor Kathleen Clark uses the phrase “public interest disclosures” to refer to such leaks.

Scholars who study leaking suggest that it can indeed be ethical to leak when the public benefit of the information is strong enough to outweigh the obligation to keep it secret.

The article is here.

Wednesday, June 28, 2017

How Milton Bradley’s morality play shaped the modern board game

An interview with Tristan Donovan by Christopher Klein
The Boston Globe
Originally published May 26, 2017

Here is an excerpt:

Donovan: By 1860, America had the start of the board game industry, but it wasn’t big. Production was done mostly by hand, since there weren’t big printing presses. An added complication at the time was that America was a much more puritanical society, and game-playing of any kind was seen by many as sinful and a waste of time.

Milton Bradley himself was fairly devout. When he set out to make a board game, he was worried his friends would frown upon it, so he wanted to make a game that would teach morality. The basic idea of The Checkered Game of Life was to amass points and in the end reach “Happy Old Age.” You could accumulate points by landing on squares for virtues such as honor and happiness, and there were squares to avoid such as gambling and idleness. It’s steering players to the righteous path.

Ideas: That morality also complicated game play.

Donovan: Dice were considered evil and associated with gambling by many, so instead he used a teetotum, which had a series of numbers printed on it that you spun like a top.

Ideas: George Parker, on the other hand, built his name on rejecting a lot of those conventions.

Donovan: All the games that were available to Parker growing up were largely morality tales like The Checkered Game of Life. He was fed up with it. He wanted to play a game and didn’t want it to be a Sunday sermon every time. His first game, Banking, was basically about amassing money through speculation. The goal was to be the richest, rather than the first to achieve a happy old age. Parker created games that were about fun and making money, which found appeal as Gilded Age America transitioned from a Puritanical society to one about making money and doing well in a career.

The interview is here.

A Teachable Ethics Scandal

Mitchell Handelsman
Teaching of Psychology

Abstract

In this article, I describe a recent scandal involving collusion between officials at the American Psychological Association (APA) and the U.S. Department of Defense, which appears to have enabled the torture of detainees at the Guantanamo Bay detention facility. The scandal is a relevant, complex, and engaging case that teachers can use in a variety of courses. Details of the scandal exemplify a number of psychological concepts, including obedience, groupthink, terror management theory, group influence, and motivation. The scandal can help students understand several factors that make ethical decision-making difficult, including stress, emotions, and cognitive factors such as loss aversion, anchoring, framing, and ethical fading. I conclude by exploring some parallels between the current torture scandal and the development of APA’s ethics guidelines regarding the use of deception in research.

The article is here.

Tuesday, June 27, 2017

Resisting Temptation for the Good of the Group: Binding Moral Values and the Moralization of Self-Control

Mooijman, Marlon; Meindl, Peter; Oyserman, Daphna; Monterosso, John; Dehghani, Morteza; Doris, John M.; Graham, Jesse
Journal of Personality and Social Psychology, Jun 12, 2017.

Abstract

When do people see self-control as a moral issue? We hypothesize that the group-focused “binding” moral values of Loyalty/betrayal, Authority/subversion, and Purity/degradation play a particularly important role in this moralization process. Nine studies provide support for this prediction. First, moralization of self-control goals (e.g., losing weight, saving money) is more strongly associated with endorsing binding moral values than with endorsing individualizing moral values (Care/harm, Fairness/cheating). Second, binding moral values mediate the effect of other group-focused predictors of self-control moralization, including conservatism, religiosity, and collectivism. Third, guiding participants to consider morality as centrally about binding moral values increases moralization of self-control more than guiding participants to consider morality as centrally about individualizing moral values. Fourth, we replicate our core finding that moralization of self-control is associated with binding moral values across studies differing in measures and design—whether we measure the relationship between moral and self-control language across time, the perceived moral relevance of self-control behaviors, or the moral condemnation of self-control failures. Taken together, our findings suggest that self-control moralization is primarily group-oriented and is sensitive to group-oriented cues.

The article is here.

No Pain, All Gain: The Case for Farming Organs in Brainless Humans

Ruth Stirton and David Lawrence
BMJ Blogs
Originally posted June 10, 2017

Here is an excerpt:

A significant challenge to this practice is that it is probably unethical to use an animal in this way for the benefit of humans. Pigs in particular have a relatively high level of sentience and consciousness, which should not be dismissed lightly.  Some would argue that animals with certain levels of sentience and consciousness – perhaps those capable of understanding what is happening to them – have moral worth and are entitled to respect and protection, and to be treated with dignity.  It is inappropriate to simply use them for the benefit of humanity.  Arguably, the level of protection ought to correlate to the level of understanding (or personhood), and thus the pig deserves a greater level of protection than the sea cucumber.  The problem here is that the sea cucumber is not sufficiently similar to the human to be of use to us when we’re thinking about organs for transplantation purposes.  The useful animals are those closest to us, which are by definition those animals with more complex brains and neural networks, and which consequently attract higher moral value.

The moral objection to using animals in this way arises because of their levels of cognition.  This moral objection would disappear if we could prevent the animals ever developing the capacity for consciousness: they would never become entities capable of being harmed.  If we were able to genetically engineer a brainless pig, leaving only the minimal neural circuits necessary to maintain heart and lung function, it could act as an organic vessel for growing organs for transplantation.  The objection based on the use of a conscious animal disappears, since this entity – it’s not clear the extent to which it is possible to call it an animal – would have no consciousness.

The blog post is here.

Monday, June 26, 2017

What’s the Point of Professional Ethical Codes?

Iain Brassington
BMJ Blogs
June 13, 2017

Here is an excerpt:

They can’t be meant as a particularly useful tool for solving deep moral dilemmas: they’re much too blunt for that, often presuppose too much, and tend to bend to suit the law.  To think that because the relevant professional code enjoins x it follows that x is permissible or right smacks of a simple appeal to authority, and this flies in the face of what it is to be a moral agent in the first place.  But what a professional code of ethics may do is to provide a certain kind of Bolamesque legal defence: if your having done φ attracts a claim that it’s negligent or unreasonable or something like that, being able to point out that your professional body endorses φ-ing will help you out.  But professional ethics, and what counts as professional discipline, stretches way beyond that.  For example, instances of workplace bullying can be matters of great professional and ethical import, but it’s not at all obvious that the law should be involved.

There’s a range of reasons why someone’s behaviour might be of professional ethical concern.  Perhaps the most obvious is a concern for public protection.  If someone has been found to have behaved in a way that endangers third parties, then the profession may well want to intervene.

The blog post is here.