Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Reasoning.

Tuesday, December 29, 2015

AI is different because it lets machines weld the emotional with the physical

By Peter McOwan
The Conversation
Originally published December 10, 2015

Here is an excerpt:

Creative intelligence

However, many are sensitive to the idea of artificial intelligence being artistic – entering the sphere of human intelligence and creativity. AI can learn to mimic the artistic process of painting, literature, poetry and music, but it does so by learning the rules, often from access to large datasets of existing work from which it extracts patterns and applies them. Robots may be able to paint – applying a brush to canvas, deciding on shapes and colours – but based on processing the example of human experts. Is this creating, or copying? (The same question has been asked of humans too.)

The entire article is here.

Sunday, November 8, 2015

Deconstructing the seductive allure of neuroscience explanations

Weisberg DS, Keil FC, Goodstein J, Rawson E, Gray JR.
Judgment and Decision Making, Vol. 10, No. 5, 
September 2015, pp. 429–441

Abstract

Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people's abilities to critically consider the underlying logic of this explanation. We tested this hypothesis by giving naïve adults, students in a neuroscience course, and neuroscience experts brief descriptions of psychological phenomena followed by one of four types of explanation, according to a 2 (good explanation vs. bad explanation) x 2 (without neuroscience vs. with neuroscience) design. Crucially, the neuroscience information was irrelevant to the logic of the explanation, as confirmed by the expert subjects. Subjects in all three groups judged good explanations as more satisfying than bad ones. But subjects in the two nonexpert groups additionally judged that explanations with logically irrelevant neuroscience information were more satisfying than explanations without. The neuroscience information had a particularly striking effect on nonexperts' judgments of bad explanations, masking otherwise salient problems in these explanations.

The entire article is here.

Friday, October 2, 2015

What Is Quantum Cognition, and How Is It Applied to Psychology?

By Jerome Busemeyer and Zheng Wang
Current Directions in Psychological Science 
June 2015 vol. 24 no. 3 163-169

Abstract

Quantum cognition is a new research program that uses mathematical principles from quantum theory as a framework to explain human cognition, including judgment and decision making, concepts, reasoning, memory, and perception. This research is not concerned with whether the brain is a quantum computer. Instead, it uses quantum theory as a fresh conceptual framework and a coherent set of formal tools for explaining puzzling empirical findings in psychology. In this introduction, we focus on two quantum principles as examples to show why quantum cognition is an appealing new theoretical direction for psychology: complementarity, which suggests that some psychological measures have to be made sequentially and that the context generated by the first measure can influence responses to the next one, producing measurement order effects, and superposition, which suggests that some psychological states cannot be defined with respect to definite values but, instead, that all possible values within the superposition have some potential for being expressed. We present evidence showing how these two principles work together to provide a coherent explanation for many divergent and puzzling phenomena in psychology.
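The complementarity principle described above can be illustrated with a minimal numerical sketch (not from the article; all names and angle values here are illustrative assumptions): a belief state is modeled as a 2-D unit vector, each question as a projection onto a subspace, and asking question A before question B gives a different joint probability than the reverse order whenever the two projectors do not commute.

```python
import math

def proj(angle):
    """Projector onto the 1-D subspace spanned by (cos a, sin a)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c * c, c * s], [s * c, s * s]]

def apply(m, v):
    """Apply a 2x2 matrix to a 2-D vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def norm_sq(v):
    return v[0] ** 2 + v[1] ** 2

def p_yes_yes(first, second, state):
    """Probability of answering 'yes' to both questions in this order:
    project onto the first question's subspace, then the second's."""
    return norm_sq(apply(second, apply(first, state)))

state = [1.0, 0.0]             # initial belief state (unit vector)
A = proj(math.pi / 8)          # hypothetical "question A" subspace
B = proj(math.pi / 3)          # hypothetical "question B" subspace

p_ab = p_yes_yes(A, B, state)  # ask A first, then B
p_ba = p_yes_yes(B, A, state)  # ask B first, then A
# Because A and B do not commute, p_ab differs from p_ba:
# the context set by the first question changes the second answer.
```

Running this gives p_ab ≈ 0.54 and p_ba ≈ 0.16, a pure order effect of the kind the authors invoke to explain question-order phenomena in surveys.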

The entire article is here.

Friday, September 25, 2015

The Effect of Probability Anchors on Moral Decision Making

By Chris Brand and Mike Oaksford

Abstract

The role of probabilistic reasoning in moral decision making has seen relatively little research, despite having potentially profound consequences for our models of moral cognition. To rectify this, two experiments were undertaken in which participants were presented with moral dilemmas alongside additional information designed to anchor judgements about how likely the dilemmas' outcomes were. These anchoring values, whether presented explicitly or implicitly, significantly altered how permissible participants judged the dilemmas to be. This was the case even for dilemmas typically seen as eliciting deontological judgements. Implications of this finding for cognitive models of moral decision making are discussed.

The entire paper is here.

Thursday, May 21, 2015

Philosophers’ Biased Judgments Persist Despite Training, Expertise and Reflection

By Eric Schwitzgebel and Fiery Cushman
In press

Abstract

We examined the effects of framing and order of presentation on professional philosophers' judgments about a moral puzzle case (the "trolley problem") and a version of the Tversky and Kahneman "Asian disease" scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider "different variants of the scenario or different ways of describing the case". Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating in the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.

The entire article is here.

Sunday, May 10, 2015

How Does Reasoning (Fail to) Contribute to Moral Judgment? Dumbfounding and Disengagement

Frank Hindriks
Ethical Theory and Moral Practice
April 2015, Volume 18, Issue 2, pp 237-250

Abstract

Recent experiments in moral psychology have been taken to imply that moral reasoning only serves to reaffirm prior moral intuitions. More specifically, Jonathan Haidt concludes from his moral dumbfounding experiments, in which people condemn other people’s behavior, that moral reasoning is biased and ineffective, as it rarely makes people change their mind. I present complementary evidence pertaining to self-directed reasoning about what to do. More specifically, Albert Bandura’s experiments concerning moral disengagement reveal that moral reasoning often does contribute effectively to the formation of moral judgments. And such reasoning need not be biased. Once this evidence is taken into account, it becomes clear that both cognition and affect can play a destructive as well as a constructive role in the formation of moral judgments.

The entire paper is here.

Tuesday, March 24, 2015

How stress influences our morality

By Lucius Caviola and Nadira Faulmüller
Academia.edu

Abstract

Several studies show that stress can influence moral judgment and behavior. In personal moral dilemmas—scenarios where someone has to be harmed by physical contact in order to save several others—participants under stress tend to make more deontological judgments than non-stressed participants, i.e. they agree less with harming someone for the greater good. Other studies demonstrate that stress can increase pro-social behavior for in-group members but decrease it for out-group members. The dual-process theory of moral judgment in combination with an evolutionary perspective on emotional reactions seems to explain these results: stress might inhibit controlled reasoning and trigger people’s automatic emotional intuitions. In other words, when it comes to morality, stress seems to make us prone to follow our gut reactions instead of our elaborate reasoning.

Friday, September 19, 2014

Using metacognitive cues to infer others’ thinking

André Mata and Tiago Almeida
Judgment and Decision Making, Vol. 9, No. 4, July 2014, pp. 349–359

Abstract

Three studies tested whether people use cues about the way other people think (for example, whether others respond fast vs. slow) to infer what responses other people might give to reasoning problems. People who solve reasoning problems using deliberative thinking have better insight than intuitive problem-solvers into the responses that other people might give to the same problems. Presumably because deliberative responders think of intuitive responses before they think of deliberative responses, they are aware that others might respond intuitively, particularly in circumstances that hinder deliberative thinking (e.g., fast responding). Intuitive responders, on the other hand, are less aware of responses other than their own, so they infer that other people respond as they do, regardless of the way others respond.

The entire article is here.

This article is important when contemplating ethical decision-making.