Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, August 3, 2019

When Do Robots Have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will

Eddy Nahmias, Corey Allen, & Bradley Loveall
Georgia State University

From the Conclusion:

If future research bolsters our initial findings, then it would appear that when people consider whether agents are free and responsible, they are considering whether the agents have capacities to feel emotions more than whether they have conscious sensations or even capacities to deliberate or reason. It’s difficult to know whether people assume that phenomenal consciousness is required for or enhances capacities to deliberate and reason. And of course, we do not deny that cognitive capacities for self-reflection, imagination, and reasoning are crucial for free and responsible agency (see, e.g., Nahmias 2018). For instance, when considering agents that are assumed to have phenomenal consciousness, such as humans, it is likely that people’s attributions of free will and responsibility decrease in response to information that an agent has severely diminished reasoning capacities. But people seem to have intuitions that support the idea that an essential condition for free will is the capacity to experience conscious emotions. And we find it plausible that these intuitions indicate that people take it to be essential to being a free agent that one can feel the emotions involved in reactive attitudes and in genuinely caring about one’s choices and their outcomes.

(cut)

Perhaps fiction points us toward the truth here. In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., and Westworld), viewers tend to think of the robots differently when they are portrayed in a way that suggests they express and feel emotions. No matter how intelligent or complex their behavior, the robots do not come across as free and autonomous until they seem to care about what happens to them (and perhaps others). Often this is portrayed by their showing fear of their own or others’ deaths, or expressing love, anger, or joy. Sometimes it is portrayed by the robots’ expressing reactive attitudes, such as indignation about how humans treat them, or by our feeling such attitudes towards them, for instance when they harm humans.

The research paper is here.

Friday, August 2, 2019

Therapist accused of sending client photos of herself in lingerie can’t get her state license back: Pa. court

Matt Miller
www.pennlive.com
Originally posted July 17, 2019

A therapist who was accused of sending a patient photos of herself in lingerie can’t have her state counseling license back, a Commonwealth Court panel ruled Wednesday.

That is so even though Sheri Colston denied sending those photos or having any inappropriate interactions with the male client, the court found in an opinion by Judge Robert Simpson.

The court ruling upholds an indefinite suspension of Colston’s license imposed by the State Board of Social Workers, Marriage and Family Therapists and Professional Counselors. That board also ordered Colston to pay $7,409 to cover the cost of investigating her case.

The info is here.

The Gordian Knot of Disposition Theory: Character Morality and Liking

Matthew Grizzard, Jialing Huang, Changhyun Ahn, and others
Journal of Media Psychology.
https://doi.org/10.1027/1864-1105/a000257

Abstract

Morally ambiguous characters are often perceived to challenge Zillmann’s affective disposition theory of drama. At the heart of this challenge is the question: “To what extent can liking be independent of character morality?” The current study examines this question with a 2 (Disposition: Positive vs. Negative) × 3 (Character Type: Hero, Antihero, Villain) between-subjects factorial experiment that induces variance in liking and morality. We assess the influence of these orthogonal manipulations on measured liking and morality. Main effects of both manipulations on the measured variables emerged, with a significant correlation between measures. Regression analyses further confirm that liking is associated with perceived morality and vice versa. Because variance in morality was induced by the liking manipulation and variance in liking was induced by the morality manipulation, the assumptions of disposition theory regarding morality and liking seem accurate. Future research directions are provided that may help reconcile and integrate the seeming challenge of morally ambiguous characters with affective disposition theory.
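The cross-over pattern the abstract reports, where the liking manipulation also moves measured morality and vice versa, can be mimicked with a toy simulation of the 2 × 3 design. This is a hypothetical data-generating process invented purely for illustration; the effect sizes, noise levels, and function names are assumptions, not the authors' data or analysis. The point is that when each manipulation feeds both measured variables, the two measures correlate even though the manipulations themselves are orthogonal.

```python
import random

random.seed(0)

DISPOSITIONS = ["positive", "negative"]
# Assumed morality "levels" for the three character types.
CHARACTERS = {"hero": 1.0, "antihero": 0.0, "villain": -1.0}

def simulate_participant(disposition, character):
    """Toy data-generating process: each manipulation spills over
    onto BOTH measured variables, as disposition theory predicts."""
    disp = 1.0 if disposition == "positive" else -1.0
    char = CHARACTERS[character]
    liking = 0.8 * disp + 0.4 * char + random.gauss(0, 0.5)
    morality = 0.4 * disp + 0.8 * char + random.gauss(0, 0.5)
    return liking, morality

# 200 simulated participants per cell of the 2 x 3 design.
data = [simulate_participant(d, c)
        for d in DISPOSITIONS for c in CHARACTERS for _ in range(200)]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

liking_scores, morality_scores = zip(*data)
r = pearson(list(liking_scores), list(morality_scores))
```

Under these assumed spill-over coefficients, `r` comes out clearly positive, which mirrors the significant correlation between the measured variables that the study reports.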

The research is here.

Here is a link to a separate paper on Moral Pornography as it applies to comic book heroes and villains.

Thursday, August 1, 2019

Google Contractors Listen to Recordings of People Using Virtual Assistant

Sarah E. Needleman and Parmy Olson
The Wall Street Journal
Originally posted July 11, 2019

Here are two excerpts:

In a blog post Thursday, Google confirmed it employs people world-wide to listen to a small sample of recordings.

The public broadcaster’s report said the recordings potentially expose sensitive information about users such as names and addresses.

It also said Google, in some cases, is recording voices of customers even when they aren’t using Google Assistant [emphasis added].

In its blog post, Google said language experts listen to 0.2% of “audio snippets” taken from the Google Assistant to better understand different languages, accents and dialects.

(cut)

It is common practice for makers of virtual assistants to record and listen to some of what their users say so they can improve on the technology, said Bret Kinsella, chief executive of Voicebot.ai, a research firm focused on voice technology and artificial intelligence.

“Anything with speech recognition, you generally have humans at one point listening and annotating to sort out what types of errors are occurring,” he said.

In May, however, a coalition of privacy and child-advocacy groups filed a complaint with federal regulators about Amazon potentially preserving conversations of young users through its Echo Dot Kids devices.

The info is here.

Ethics in the Age of Artificial Intelligence

Shohini Kundu
Scientific American
Originally published July 3, 2019

Here is an excerpt:

Unfortunately, in that decision-making process, AI also took away the transparency, explainability, predictability, teachability and auditability of the human move, replacing it with opacity. The logic for the move is not only unknown to the players, but also unknown to the creators of the program. As AI makes decisions for us, transparency and predictability of decision-making may become a thing of the past.

Imagine a situation in which your child comes home to you and asks for an allowance to go see a movie with her friends. You oblige. A week later, your other child comes to you with the same request, but this time, you decline. This will immediately raise the issue of unfairness and favoritism. To avoid any accusation of favoritism, you explain to your child that she must finish her homework before qualifying for any pocket money.

Without any explanation, there is bound to be tension in the family. Now imagine replacing your role with an AI system that has gathered data from thousands of families in similar situations. By studying the consequence of allowance decisions on other families, it comes to the conclusion that one sibling should get the pocket money while the other sibling should not.

But the AI system cannot really explain the reasoning—other than to say that it weighed your child’s hair color, height, weight and all other attributes that it has access to in arriving at a decision that seems to work best for other families. How is that going to work?

The info is here.

Wednesday, July 31, 2019

US Senators Call for International Guidelines for Germline Editing

Jef Akst
www.the-scientist.com
Originally published July 16, 2019

Here is an excerpt:

“Gene editing is a powerful technology that has the potential to lead to new therapies for devastating and previously untreatable diseases,” Feinstein says in a statement. “However, like any new technology, there is potential for misuse. The international community must establish standards for gene-editing research to develop global ethical principles and prevent unethical researchers from moving to whichever country has the loosest regulations.” (Editing embryos for reproductive purposes is already illegal in the US.)

In addition, the resolution makes clear that the trio of senators “opposes the experiments that resulted in pregnancies using genome-edited human embryos”—referring to the revelation last fall that researcher He Jiankui had CRISPRed the genomes of two babies born in China.

The info is here.

The “Fake News” Effect: An Experiment on Motivated Reasoning and Trust in News

Michael Thaler
Harvard University
Originally published May 28, 2019

Abstract

When people receive information about controversial issues such as immigration policies, upward mobility, and racial discrimination, the information often evokes both what they currently believe and what they are motivated to believe. This paper theoretically and experimentally explores the importance of this latter channel in inference: motivated reasoning. In the theory of motivated reasoning this paper develops, people misupdate from information by treating their motivated beliefs as an extra signal. To test the theory, I create a new experimental design in which people make inferences about the veracity of news sources. This design is unique in that it distinguishes motivated reasoning from Bayesian updating and confirmation bias, and doesn’t require elicitation of people’s entire belief distribution. It is also very portable: in a large online experiment, I find the first identifying evidence for politically driven motivated reasoning on eight different economic and social issues. Motivated reasoning leads people to become more polarized, less accurate, and more overconfident in their beliefs about these issues.
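The abstract's core mechanism, misupdating by treating a motivated belief as one extra signal, can be sketched with a toy binary-signal model. This is a minimal illustrative sketch, not the paper's actual formalization; the functions and parameter values below are invented for exposition.

```python
def bayes_update(prior, signal, accuracy):
    """Posterior probability that the state is 1 after one binary
    signal that matches the true state with the given accuracy."""
    like1 = accuracy if signal == 1 else 1 - accuracy
    like0 = 1 - like1
    num = prior * like1
    return num / (num + (1 - prior) * like0)

def motivated_update(prior, signal, accuracy, motive, motive_strength):
    """Misupdate in the paper's sense: do the Bayesian update, then
    treat the motivated belief as if it were one additional signal."""
    posterior = bayes_update(prior, signal, accuracy)
    return bayes_update(posterior, motive, motive_strength)

# A neutral Bayesian and a motivated reasoner see the same
# 60%-accurate signal favoring the claim; the motivated reasoner
# is motivated to believe the opposite (motive = 0).
p_bayes = bayes_update(0.5, 1, 0.6)                               # 0.6
p_motivated = motivated_update(0.5, 1, 0.6, motive=0,
                               motive_strength=0.6)               # 0.5
```

In this toy case the Bayesian ends at 0.6 while the oppositely motivated reasoner slides back to 0.5, as if the evidence had been partly cancelled; with the motive pointing the same way as the signal, the distortion runs the other direction, producing the polarization and overconfidence the abstract describes.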

From the Conclusion:

One interpretation of this paper is unambiguously bleak: People of all demographics similarly motivatedly reason, do so on essentially every topic they are asked about, and make particularly biased inferences on issues they find important. However, there is an alternative interpretation: This experiment takes a step towards better understanding motivated reasoning, and makes it easier for future work to attenuate the bias. Using this experimental design, we can identify and estimate the magnitude of the bias; future projects that use interventions to attempt to mitigate motivated reasoning can use this estimated magnitude as an outcome variable. Since the bias does decrease utility in at least some settings, people may have demand for such interventions.

The research is here.

Tuesday, July 30, 2019

Is belief superiority justified by superior knowledge?

Michael P. Hall & Kaitlin T. Raimi
Journal of Experimental Social Psychology
Volume 76, May 2018, Pages 290-306

Abstract

Individuals expressing belief superiority—the belief that one's views are superior to other viewpoints—perceive themselves as better informed about that topic, but no research has verified whether this perception is justified. The present research examined whether people expressing belief superiority on four political issues demonstrated superior knowledge or superior knowledge-seeking behavior. Despite perceiving themselves as more knowledgeable, knowledge assessments revealed that belief-superior individuals exhibited the greatest gaps between their perceived and actual knowledge. When given the opportunity to pursue additional information in that domain, belief-superior individuals frequently favored agreeable over disagreeable information, but also indicated awareness of this bias. Lastly, experimentally manipulated feedback about one's knowledge had some success in affecting belief superiority and resulting information-seeking behavior. Specifically, when belief superiority is lowered, people attend to information they may have previously regarded as inferior. Implications of unjustified belief superiority and biased information pursuit for political discourse are discussed.

The research is here.

Ethics In The Digital Age: Protect Others' Data As You Would Your Own

Jeff Thomson
Forbes.com
Originally posted July 1, 2019

Here is an excerpt:

2. Ensure they are using people’s data with their consent. 

In theory, an increasing amount of rights to data use is willingly signed over by people through digital acceptance of privacy policies. But a recent investigation by the European Commission, following up on the impact of GDPR, indicated that corporate privacy policies remain too difficult for consumers to understand or even read. When analyzing the ethics of using data, finance professionals must personally reflect on whether the way information is being used is consistent with how consumers, clients or employees understand and expect it to be used. Furthermore, they should question if data is being used in a way that is necessary for achieving business goals in an ethical manner.

3. Follow the “golden rule” when it comes to data. 

Finally, finance professionals must reflect on whether they would want their own personal information being used to further business goals in the way that they are helping their organization use the data of others. This goes beyond regulations and the fine print of privacy agreements: it is adherence to the ancient, universal standard of refusing to do to other people what you would not want done to yourself. Admittedly, this is subjective and difficult to define. But finance professionals will be confronted with many situations in which there are no clear answers, and they must have the ability to think about the ethical implications of actions that might not necessarily be illegal.

The info is here.