Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Intuitions.

Friday, February 17, 2023

Free Will Is Only an Illusion if You Are, Too

Alessandra Buccella and Tomáš Dominik
Scientific American
Originally posted January 16, 2023

Here is an excerpt:

In 2019 neuroscientists Uri Maoz, Liad Mudrik and their colleagues investigated that idea. They presented participants with a choice of two nonprofit organizations to which they could donate $1,000. People could indicate their preferred organization by pressing the left or right button. In some cases, participants knew that their choice mattered because the button would determine which organization would receive the full $1,000. In other cases, people knowingly made meaningless choices because they were told that both organizations would receive $500 regardless of their selection. The results were somewhat surprising. Meaningless choices were preceded by a readiness potential, just as in previous experiments. Meaningful choices were not, however. When we care about a decision and its outcome, our brain appears to behave differently than when a decision is arbitrary.

Even more interesting is the fact that ordinary people’s intuitions about free will and decision-making do not seem consistent with these findings. Some of our colleagues, including Maoz and neuroscientist Jake Gavenas, recently published the results of a large survey, with more than 600 respondents, in which they asked people to rate how “free” various choices made by others seemed. Their ratings suggested that people do not recognize that the brain may handle meaningful choices in a different way from more arbitrary or meaningless ones. People tend, in other words, to imagine all their choices—from which sock to put on first to where to spend a vacation—as equally “free,” even though neuroscience suggests otherwise.

What this tells us is that free will may exist, but it may not operate in the way we intuitively imagine. In the same vein, there is a second intuition that must be addressed to understand studies of volition. When experiments have found that brain activity, such as the readiness potential, precedes the conscious intention to act, some people have jumped to the conclusion that they are “not in charge.” They do not have free will, they reason, because they are somehow subject to their brain activity.

But that assumption misses a broader lesson from neuroscience. “We” are our brain. The combined research makes clear that human beings do have the power to make conscious choices. But that agency and accompanying sense of personal responsibility are not supernatural. They happen in the brain, regardless of whether scientists observe them as clearly as they do a readiness potential.

So there is no “ghost” inside the cerebral machine. But as researchers, we argue that this machinery is so complex, inscrutable and mysterious that popular concepts of “free will” or the “self” remain incredibly useful. They help us think through and imagine—albeit imperfectly—the workings of the mind and brain. As such, they can guide and inspire our investigations in profound ways—provided we continue to question and test these assumptions along the way.


Friday, January 6, 2023

Political sectarianism in America

Finkel, E. J., Bail, C. A., et al. (2020).
Science, 370(6516), 533–536.
https://doi.org/10.1126/science.abe1715

Abstract

Political polarization, a concern in many countries, is especially acrimonious in the United States (see the first box). For decades, scholars have studied polarization as an ideological matter—how strongly Democrats and Republicans diverge vis-à-vis political ideals and policy goals. Such competition among groups in the marketplace of ideas is a hallmark of a healthy democracy. But more recently, researchers have identified a second type of polarization, one focusing less on triumphs of ideas than on dominating the abhorrent supporters of the opposing party (1). This literature has produced a proliferation of insights and constructs but few interdisciplinary efforts to integrate them. We offer such an integration, pinpointing the superordinate construct of political sectarianism and identifying its three core ingredients: othering, aversion, and moralization. We then consider the causes of political sectarianism and its consequences for U.S. society—especially the threat it poses to democracy. Finally, we propose interventions for minimizing its most corrosive aspects.

(cut)

Here, we consider three avenues for intervention that hold particular promise for ameliorating political sectarianism. The first addresses people’s faulty perceptions or intuitions. For example, correcting misperceptions of opposing partisans, such as their level of hostility toward one’s copartisans, reduces sectarianism. Such correction efforts can encourage people to engage in cross-party interactions (SM) or to consider their own positive experiences with opposing partisans, especially a friend, family member, or neighbor. Doing so can reduce the role of motivated partisan reasoning in the formation of policy opinions.

A related idea is to instill intellectual humility, such as by asking people to explain policy preferences at a mechanistic level—for example, why they favor their position on a national flat tax or on carbon emissions. According to a recent study, relative to people assigned to the more lawyerly approach of justifying their preexisting policy preferences, those asked to provide mechanistic explanations gain appreciation for the complexities involved.

(cut)

From the end of the article:

Political sectarianism cripples a nation’s ability to confront challenges. Bolstering the emphasis on political ideas rather than political adversaries is not a sufficient solution, but it is likely to be a major step in the right direction. The interventions proposed above offer some promising leads, but any serious effort will require multifaceted efforts to change leadership, media, and democratic systems in ways that are sensitive to human psychology. There are no silver bullets.


A good reminder for psychologists and those involved in the mental health field.

Monday, March 28, 2022

Do people understand determinism? The tracking problem for measuring free will beliefs

Murray, S., Dykhuis, E., & Nadelhoffer, T.
(2022, February 8). 
https://doi.org/10.31234/osf.io/kyza7

Abstract

Experimental work on free will typically relies on using deterministic stimuli to elicit judgments of free will. We call this the Vignette-Judgment model. In this paper, we outline a problem with research based on this model. It seems that people either fail to respond to the deterministic aspects of vignettes when making judgments or that their understanding of determinism differs from researcher expectations. We provide some empirical evidence for a key assumption of the problem. In the end, we argue that people seem to lack facility with the concept of determinism, which calls into question the validity of experimental work operating under the Vignette-Judgment model. We also argue that alternative experimental paradigms are unlikely to elicit judgments that are philosophically relevant to questions about the metaphysics of free will.

Error and judgment

Our results show that people make several errors about deterministic stimuli used to elicit judgments about free will and responsibility. Many participants seem to conflate determinism with different constructs (bypassing or fatalism) or mistakenly interpret the implications of deterministic constraints on agents (intrusion).

Measures of item invariance suggest that participants were not responding differently to error measures across different vignettes. Hence, responses to error measures cannot be explained exclusively in terms of differences in vignettes, but rather seem to reflect participants’ mistaken judgments about determinism. Further, these mistakes are associated with significant differences in judgments about free will. Some of the patterns are predictable: participants who conflate determinism with bypassing attribute less free will to individuals in deterministic scenarios, while participants who import intrusion into deterministic scenarios attribute greater free will. This makes sense. As participants perceive mental states to be less causally efficacious or individuals as less ultimately in control of their decisions, free will is diminished. However, as people perceive more indeterminism, free will is amplified.

Additionally, we found that errors of intrusion are stronger than errors of bypassing or fatalism. Because bypassing errors are associated with diminished judgments of free will and intrusion errors are associated with amplified judgments, if all three errors were equal in strength we would expect a linear relationship between different errors: individuals who make bypassing errors would have the lowest average judgments, individuals who make intrusion errors would have the highest average judgments, and people who make both errors would be in the middle (as the two errors would cancel each other out). We did not observe this relationship. Instead, participants who make intrusion errors are statistically indistinguishable from each other, no matter what other kinds of errors they make.
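
To make that reasoning concrete, here is a toy additive model in Python (our illustration with invented numbers, not data from the paper). If the two error types were equal in strength and simply summed, the group means would be ordered linearly, with the both-errors group in the middle:

# Toy additive model of error effects on free-will judgments.
# All numbers are invented for illustration only.
baseline = 4.0           # mean free-will judgment with no errors (e.g., on a 1-7 scale)
bypass_effect = -1.0     # bypassing errors diminish judgments of free will
intrusion_effect = +1.0  # intrusion errors amplify them

groups = {
    "bypassing only": baseline + bypass_effect,                    # predicted lowest
    "both errors": baseline + bypass_effect + intrusion_effect,    # predicted middle
    "intrusion only": baseline + intrusion_effect,                 # predicted highest
}

for name, mean in groups.items():
    print(f"{name}: {mean:.1f}")

# Predicted ordering under equal strength: bypassing only < both < intrusion only.
# The study instead found that anyone who makes an intrusion error looks like the
# "intrusion only" group, i.e., intrusion dominates rather than merely adds.

The failure of this predicted ordering to appear in the data is what licenses the authors’ conclusion that intrusion errors trump the others.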

Thus, errors of intrusion seem to trump others in the process of forming judgments of free will. The errors people make, then, are not incidentally related to their judgments. Instead, there are significant associations between people’s inferential errors about determinism and how they attribute free will and responsibility. This evidence supports our claim that people make several errors about the nature and implications of determinism.

Monday, March 22, 2021

The Mistrust of Science

Atul Gawande
The New Yorker
Originally posted 01 June 2016

Here is an excerpt:

The scientific orientation has proved immensely powerful. It has allowed us to nearly double our lifespan during the past century, to increase our global abundance, and to deepen our understanding of the nature of the universe. Yet scientific knowledge is not necessarily trusted. Partly, that’s because it is incomplete. But even where the knowledge provided by science is overwhelming, people often resist it—sometimes outright deny it. Many people continue to believe, for instance, despite massive evidence to the contrary, that childhood vaccines cause autism (they do not); that people are safer owning a gun (they are not); that genetically modified crops are harmful (on balance, they have been beneficial); that climate change is not happening (it is).

Vaccine fears, for example, have persisted despite decades of research showing them to be unfounded. Some twenty-five years ago, a statistical analysis suggested a possible association between autism and thimerosal, a preservative used in vaccines to prevent bacterial contamination. The analysis turned out to be flawed, but fears took hold. Scientists then carried out hundreds of studies, and found no link. Still, fears persisted. Countries removed the preservative but experienced no reduction in autism—yet fears grew. A British study claimed a connection between the onset of autism in eight children and the timing of their vaccinations for measles, mumps, and rubella. That paper was retracted due to findings of fraud: the lead author had falsified and misrepresented the data on the children. Repeated efforts to confirm the findings were unsuccessful. Nonetheless, vaccine rates plunged, leading to outbreaks of measles and mumps that, last year, sickened tens of thousands of children across the U.S., Canada, and Europe, and resulted in deaths.

People are prone to resist scientific claims when they clash with intuitive beliefs. They don’t see measles or mumps around anymore. They do see children with autism. And they see a mom who says, “My child was perfectly fine until he got a vaccine and became autistic.”

Now, you can tell them that correlation is not causation. You can say that children get a vaccine every two to three months for the first couple years of their life, so the onset of any illness is bound to follow vaccination for many kids. You can say that the science shows no connection. But once an idea has got embedded and become widespread, it becomes very difficult to dig it out of people’s brains—especially when they do not trust scientific authorities. And we are experiencing a significant decline in trust in scientific authorities.
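
To see why the second point holds, consider a small simulation (our sketch, not Gawande’s; the visit schedule is an assumption). If well-child vaccine visits occur every couple of months and an unrelated condition first appears at a random age, a large share of onsets will fall shortly after a vaccination by coincidence alone:

import random

random.seed(0)

# Assumed schedule: vaccine visits roughly every 75 days over the first two years.
visits = list(range(60, 730, 75))

n = 100_000
follows_vaccine = 0
for _ in range(n):
    onset = random.uniform(0, 730)  # age at onset of an unrelated condition, in days
    # Does the onset land within 30 days after some vaccine visit?
    if any(0 <= onset - v <= 30 for v in visits):
        follows_vaccine += 1

print(f"{follows_vaccine / n:.0%} of random onsets occur within a month of a vaccine")
# With this schedule the figure is roughly 37%: no causal link required.

Under these assumptions, more than a third of purely coincidental onsets would look, to a worried parent, like they followed a vaccine.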


5 years old, and still relevant.

Thursday, October 1, 2020

Intentional Action Without Knowledge

Vekony, R., Mele, A. & Rose, D.
Synthese (2020).

Abstract

In order to be doing something intentionally, must one know that one is doing it? Some philosophers have answered yes. Our aim is to test a version of this knowledge thesis, what we call the Knowledge/Awareness Thesis, or KAT. KAT states that an agent is doing something intentionally only if he knows that he is doing it or is aware that he is doing it. Here, using vignettes featuring skilled action and vignettes featuring habitual action, we provide evidence that, in various scenarios, a majority of non-specialists regard agents as intentionally doing things that the agents do not know they are doing and are not aware of doing. This puts pressure on proponents of KAT and leaves it to them to find a way these results can coexist with KAT.

Conclusion

Our aim was to evaluate KAT empirically. We found that majority responses to our vignettes are at odds with KAT. Our results show that, on an ordinary view of matters, neither knowledge nor awareness of doing something is necessary for doing it intentionally. We tested cases of skilled action and habitual action, and we found that, for both, people ascribed intentionality to an action at an appreciably higher rate than knowledge and awareness.

The research is here.

Friday, September 25, 2020

Science can explain other people’s minds, but not mine: self-other differences in beliefs about science

André Mata, Cláudia Simão & Rogério Gouveia
(2020). https://doi.org/10.1080/15298868.2020.1791950

Abstract

Four studies show that people differ in their lay beliefs concerning the degree to which science can explain their mind and the minds of other people. In particular, people are more receptive to the idea that the psychology of other people is explainable by science than to the possibility of science explaining their own psychology. This self-other difference is moderated by the degree to which people associate a certain mental phenomenon with introspection. Moreover, this self-other difference has implications for the science-recommended products and practices that people choose for themselves versus others.

General discussion

These studies suggest that people have different beliefs regarding what science can explain about the way they think versus the way other people think. Study 1 showed that, in general, people see science as better able to explain the psychology of other people than their own, and that this is particularly the case when a certain psychological phenomenon is highly associated with introspection (though there were other significant moderators in this study, and results were not consistent across dependent variables). Study 2 replicated this interaction, whereby science is seen as having greater explanatory power for other people than for oneself, but only when introspection is involved. Whereas Studies 1–2 provided correlational evidence, Study 3 provided an experimental test of the role of introspection in self-other differences in thinking about science and what it can explain. The results lent clear support to those of the previous studies: for highly introspective phenomena, people believe that science is better at making sense of others than of themselves, whereas this self-other difference disappears when introspection is not thought to be involved. Finally, Study 4 demonstrated that this self-other difference has implications for the choices that people make for themselves and how they differ from the choices that they advise others to make. In particular, people are more reluctant to try certain products and procedures targeted at areas of their mental life that are highly associated with introspection, but they are less reluctant to advise other people to try those same products and procedures. Lending additional support to the role of introspection in generating this self-other difference, this choice-advice asymmetry was not observed for areas that were not associated with introspection.

A pdf can be downloaded here.

Wednesday, June 10, 2020

Metacognition in moral decisions: judgment extremity and feeling of rightness in moral intuitions

Solange Vega and colleagues
Thinking & Reasoning

This research investigated the metacognitive underpinnings of moral judgment. Participants in two studies were asked to provide quick intuitive responses to moral dilemmas and to indicate their feeling of rightness about those responses. Afterwards, participants were given extra time to rethink their responses, and change them if they so wished. The feeling of rightness associated with the initial judgments was predictive of whether participants chose to change their responses and how long they spent rethinking them. Thus, one’s metacognitive experience upon first coming up with a moral judgment influences whether one sticks to that initial gut feeling or decides to put more thought into it and revise it. Moreover, while the type of moral judgment (i.e., deontological vs. utilitarian) was not consistently predictive of metacognitive experience, the extremity of that judgment was: Extreme judgments (either deontological or utilitarian) were quicker and felt more right than moderate judgments.

From the General Discussion

Also consistent with Bago and De Neys’ findings (2018), these results show that few people revise their responses from one type of moral judgment to the other (i.e., from deontological to utilitarian, or vice-versa). Still, many people do revise their responses, though these are subtler revisions of extremity within one type of response. These results speak against the traditional corrective model, whereby people tend to change from deontological intuitions to utilitarian deliberations in the course of making moral judgments. At the same time, they suggest a more nuanced perspective than what one might conclude from Bago and De Neys’ results that few people revise their responses. In sum, few people make revisions in the kind of response they give, but many do revise the degree to which they defend a certain moral position.

The research is here.

Sunday, February 2, 2020

Empirical Work in Moral Psychology

Joshua May
Routledge Encyclopedia of Philosophy

How do we form our moral judgments, and how do they influence behavior? What ultimately motivates kind versus malicious action? Moral psychology is the interdisciplinary study of such questions about the mental lives of moral agents, including moral thought, feeling, reasoning, and motivation. While these questions can be studied solely from the armchair or using only empirical tools, researchers in various disciplines, from biology to neuroscience to philosophy, can address them in tandem. Some key topics in this respect revolve around moral cognition and motivation, such as moral responsibility, altruism, the structure of moral motivation, weakness of will, and moral intuitions. Of course there are other important topics as well, including emotions, character, moral development, self-deception, addiction, well-being, and the evolution of moral capacities.

Some of the primary objects of study in moral psychology are the processes driving moral action. For example, we think of ourselves as possessing free will; as being responsible for what we do; as capable of self-control; and as capable of genuine concern for the welfare of others. Such claims can be tested by empirical methods to some extent in at least two ways. First, we can determine what in fact our ordinary thinking is. While many philosophers investigate this through rigorous reflection on concepts, we can also use the empirical methods of the social sciences. Second, we can investigate empirically whether our ordinary thinking is correct or illusory. For example, we can check the empirical adequacy of philosophical theories, assessing directly any claims made about how we think, feel, and behave.

Understanding the psychology of moral individuals is certainly interesting in its own right, but it also often has direct implications for other areas of ethics, such as metaethics and normative ethics. For instance, determining the role of reason versus sentiment in moral judgment and motivation can shed light on whether moral judgments are cognitive, and perhaps whether morality itself is in some sense objective. Similarly, evaluating moral theories, such as deontology and utilitarianism, often relies on intuitive judgments about what one ought to do in various hypothetical cases. Empirical research can again serve as a tool to determine what exactly our intuitions are and which psychological processes generate them, contributing to a rigorous evaluation of the warrant of moral intuitions.

The paper can be downloaded here.

Tuesday, November 12, 2019

Errors in Moral Forecasting: Perceptions of Affect Shape the Gap Between Moral Behaviors and Moral Forecasts

Teper, R., Zhong, C.‐B., and Inzlicht, M. (2015)
Social and Personality Psychology Compass, 9, 1– 14,
doi: 10.1111/spc3.12154

Abstract

Within the past decade, the field of moral psychology has begun to disentangle the mechanics behind moral judgments, revealing the vital role that emotions play in driving these processes. However, given the well‐documented dissociation between attitudes and behaviors, we propose that an equally important issue is how emotions inform actual moral behavior – a question that has been relatively ignored up until recently. By providing a review of recent studies that have begun to explore how emotions drive actual moral behavior, we propose that emotions are instrumental in fueling real‐life moral actions. Because research examining the role of emotional processes on moral behavior is currently limited, we push for the use of behavioral measures in the field in the hopes of building a more complete theory of real‐life moral behavior.

Conclusion

Long gone are the days when emotion was written off as a distractor or a roadblock to effective moral decision making. There now exists a great deal of evidence bolstering the idea that emotions are actually necessary for initiating adaptive behavior (Bechara, 2004; Damasio, 1994; Panksepp & Biven, 2012). Furthermore, evidence from the field of moral psychology points to the fact that individuals rely quite heavily on emotional and intuitive processes when engaging in moral judgments (e.g., Haidt, 2001). However, up until recently, the playing field of moral psychology has been heavily dominated by research revolving around moral judgments alone, especially when investigating the role that emotions play in motivating moral decision-making.

A pdf can be downloaded here.

Wednesday, September 18, 2019

Reasons or Rationalisations: The Role of Principles in the Moral Dumbfounding Paradigm

Cillian McHugh, Marek McGann, Eric Igou, & Elaine L. Kinsella 
PsyArXiv
Last edited August 15, 2019

Abstract

Moral dumbfounding occurs when people maintain a moral judgment even though they cannot provide reasons for it. Recently, questions have been raised about whether dumbfounding is a real phenomenon. Two reasons have been proposed as guiding the judgments of dumbfounded participants: harm-based reasons (believing an action may cause harm) or norm-based reasons (breaking a moral norm is inherently wrong). Participants who endorsed either reason were excluded from analysis, and instances of moral dumbfounding seemingly reduced to non-significance. We argue that endorsing a reason is not sufficient evidence that a judgment is grounded in that reason. Stronger evidence should additionally account for (a) articulating a given reason, and (b) consistently applying the reason in different situations. Building on this, we develop revised exclusion criteria across 2 studies. Study 1 included an open-ended response option immediately after the presentation of a moral scenario. Responses were coded for mention of harm-based or norm-based reasons. Participants were excluded from analysis if they both articulated and endorsed a given reason. Using these revised criteria for exclusion, we found evidence for dumbfounding, as measured by the selecting of an admission of not having reasons. Study 2 included a further three questions assessing the consistency with which people apply harm-based reasons. As predicted, few participants consistently applied, articulated, and endorsed harm-based reasons, and evidence for dumbfounding was found.
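
For readers who want the revised exclusion rule spelled out, here is a minimal sketch in Python (our construction; the column names and values are hypothetical, not the study’s data):

import pandas as pd

# Hypothetical coding of four participants. In the actual studies, open-ended
# responses were coded for harm-based and norm-based reasons; these values are invented.
df = pd.DataFrame({
    "participant": [1, 2, 3, 4],
    "endorsed_harm": [True, True, False, False],
    "articulated_harm": [True, False, False, False],
    "endorsed_norm": [False, True, True, False],
    "articulated_norm": [False, False, True, False],
})

# Earlier criterion: exclude anyone who merely endorsed a reason.
excluded_old = df["endorsed_harm"] | df["endorsed_norm"]

# Revised criterion (per the abstract): exclude only participants who both
# articulated and endorsed the same reason.
excluded_revised = (df["endorsed_harm"] & df["articulated_harm"]) | (
    df["endorsed_norm"] & df["articulated_norm"]
)

print(df.assign(excluded_old=excluded_old, excluded_revised=excluded_revised))
# Participant 2 endorses both reasons but articulates neither: excluded under the
# old rule, retained under the revised one. Retaining such participants is how
# evidence of dumbfounding reappears in the data.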

The research is here.

Tuesday, February 19, 2019

How Our Attitude Influences Our Sense Of Morality

Konrad Bocian
Science Trend
Originally posted January 18, 2019

Here is an excerpt:

People think that their moral judgment is as rational and objective as scientific statements, but science does not confirm that belief. Within the last two decades, scholars interested in moral psychology discovered that people produce moral judgments based on fast, automatic intuitions rather than rational, controlled reasoning. For example, moral cognition research showed that moral judgments arise in approximately 250 milliseconds, and even then we are not able to explain them. Developmental psychologists have shown that already at the age of three months, babies who do not yet have any language skills can distinguish a good protagonist (a helping one) from a bad one (a hindering one). But this does not mean that people’s moral judgments are based solely on intuitions. We can use deliberative processes when conditions are favorable – when we are both motivated to engage in and capable of conscious responding.

When we imagine how we would morally judge other people in a specific situation, we refer to actual rules and norms. If those rules are violated, the act itself is immoral. But we forget that intuitive reasoning also plays a role in forming a moral judgment. It is easy to condemn the librarian when our interest is involved only on paper, but the whole picture changes when real money is on the table. We have known that rule for a very long time, but we still forget to use it when we predict our moral judgments.

Based on previous research on the intuitive nature of moral judgment, we decided to test how far our attitudes can impact our perception of morality. In our daily life, we meet a lot of people who are to some degree familiar, and we either have a positive or negative attitude toward these people.

The info is here.

Monday, December 24, 2018

Your Intuition Is Wrong, Unless These 3 Conditions Are Met

Emily Zulz
www.thinkadvisor.com
Originally posted November 16, 2018

Here is an excerpt:

“Intuitions of master chess players when they look at the board [and make a move], they’re accurate,” he said. “Everybody who’s been married could guess their wife’s or their husband’s mood by one word on the telephone. That’s an intuition and it’s generally very good, and very accurate.”

According to Kahneman, who’s studied when one can trust intuition and when one cannot, there are three conditions that need to be met in order to trust one’s intuition.

The first is that there has to be some regularity in the world that someone can pick up and learn.

“So, chess players certainly have it. Married people certainly have it,” Kahneman explained.

However, he added, people who pick stocks in the stock market do not have it.

“Because, the stock market is not sufficiently regular to support developing that kind of expert intuition,” he explained.

The second condition for accurate intuition is “a lot of practice,” according to Kahneman.

And the third condition is immediate feedback. Kahneman said that “you have to know almost immediately whether you got it right or got it wrong.”

The info is here.

Monday, May 14, 2018

No Luck for Moral Luck

Markus Kneer, University of Zurich; Edouard Machery, University of Pittsburgh
Draft, March 2018

Abstract

Moral philosophers and psychologists often assume that people judge morally lucky and morally unlucky agents differently, an assumption that stands at the heart of the puzzle of moral luck. We examine whether the asymmetry is found for reflective intuitions regarding wrongness, blame, permissibility and punishment judgments, whether people's concrete, case-based judgments align with their explicit, abstract principles regarding moral luck, and what psychological mechanisms might drive the effect. Our experiments produce three findings: First, in within-subjects experiments favorable to reflective deliberation, wrongness, blame, and permissibility judgments across different moral luck conditions are the same for the vast majority of people. The philosophical puzzle of moral luck, and the challenge to the very possibility of systematic ethics it is frequently taken to engender, thus simply does not arise. Second, punishment judgments are significantly more outcome-dependent than wrongness, blame, and permissibility judgments. While this is evidence in favor of current dual-process theories of moral judgment, the latter need to be qualified since punishment does not pattern with blame. Third, in between-subjects experiments, outcome has an effect on all four types of moral judgments. This effect is mediated by negligence ascriptions and can ultimately be explained as due to differing probability ascriptions across cases.

The manuscript is here.

Tuesday, July 25, 2017

A new breed of scientist, with brains of silicon

John Bohannon
Science Magazine
Originally published July 5, 2017

Here is an excerpt:

But here’s the key difference: When the robots do finally discover the genetic changes that boost chemical output, they don’t have a clue about the biochemistry behind their effects.

Is it really science, then, if the experiments don’t deepen our understanding of how biology works? To Kimball, that philosophical point may not matter. “We get paid because it works, not because we understand why.”

So far, Hoffman says, Zymergen’s robotic lab has boosted the efficiency of chemical-producing microbes by more than 10%. That increase may not sound like much, but in the $160-billion-per-year sector of the chemical industry that relies on microbial fermentation, a fractional improvement could translate to more money than the entire $7 billion annual budget of the National Science Foundation. And the advantageous genetic changes that the robots find represent real discoveries, ones that human scientists probably wouldn’t have identified. Most of the output-boosting genes are not directly related to synthesizing the desired chemical, for instance, and half have no known function. “I’ve seen this pattern now in several different microbes,” Dean says. Finding the right genetic combinations without machine learning would be like trying to crack a safe with thousands of numbers on its dial. “Our intuitions are easily overwhelmed by the complexity,” he says.
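
To put rough numbers on the safe-cracking analogy (our back-of-the-envelope arithmetic; the gene count and edit count are assumptions, not figures from the article):

from math import comb

genes = 4000  # assumed number of genes in an industrial microbe
edits = 5     # assumed number of simultaneous genetic changes

print(f"{comb(genes, edits):.3e} possible 5-gene combinations")
# About 8.5e15. Screening even a thousand strains per day would take tens of
# billions of years, which is why the search is delegated to machine learning.

The point is not the exact figure but the scale: no human team could enumerate such a space by intuition.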

The article is here.

Tuesday, May 16, 2017

Why are we reluctant to trust robots?

Jim Everett, David Pizarro and Molly Crockett
The Guardian
Originally posted April 27, 2017

Technologies built on artificial intelligence are revolutionising human life. As these machines become increasingly integrated in our daily lives, the decisions they face will go beyond the merely pragmatic, and extend into the ethical. When faced with an unavoidable accident, should a self-driving car protect its passengers or seek to minimise overall lives lost? Should a drone strike a group of terrorists planning an attack, even if civilian casualties will occur? As artificially intelligent machines become more autonomous, these questions are impossible to ignore.

There are good arguments for why some ethical decisions ought to be left to computers—unlike human beings, machines are not led astray by cognitive biases, do not experience fatigue, and do not feel hatred toward an enemy. An ethical AI could, in principle, be programmed to reflect the values and rules of an ideal moral agent. And free from human limitations, such machines could even be said to make better moral decisions than us. Yet the notion that a machine might be given free rein over moral decision-making seems distressing to many—so much so that, for some, their use poses a fundamental threat to human dignity. Why are we so reluctant to trust machines when it comes to making moral decisions? Psychology research provides a clue: we seem to have a fundamental mistrust of individuals who make moral decisions by calculating costs and benefits – like computers do.

The article is here.

Wednesday, December 28, 2016

Inference of trustworthiness from intuitive moral judgments

Everett JA., Pizarro DA., Crockett MJ.
Journal of Experimental Psychology: General, Vol 145(6), Jun 2016, 772-787.

Moral judgments play a critical role in motivating and enforcing human cooperation, and research on the proximate mechanisms of moral judgments highlights the importance of intuitive, automatic processes in forming such judgments. Intuitive moral judgments often share characteristics with deontological theories in normative ethics, which argue that certain acts (such as killing) are absolutely wrong, regardless of their consequences. Why do moral intuitions typically follow deontological prescriptions, as opposed to those of other ethical theories? Here, we test a functional explanation for this phenomenon by investigating whether agents who express deontological moral judgments are more valued as social partners. Across 5 studies, we show that people who make characteristically deontological judgments are preferred as social partners, perceived as more moral and trustworthy, and are trusted more in economic games. These findings provide empirical support for a partner choice account of moral intuitions whereby typically deontological judgments confer an adaptive function by increasing a person's likelihood of being chosen as a cooperation partner. Therefore, deontological moral intuitions may represent an evolutionarily prescribed prior that was selected for through partner choice mechanisms.

The article is here.

Thursday, July 7, 2016

The Mistrust of Science

By Atul Gawande
The New Yorker
Originally posted June 10, 2016

Here are two excerpts:

The scientific orientation has proved immensely powerful. It has allowed us to nearly double our lifespan during the past century, to increase our global abundance, and to deepen our understanding of the nature of the universe. Yet scientific knowledge is not necessarily trusted. Partly, that’s because it is incomplete. But even where the knowledge provided by science is overwhelming, people often resist it—sometimes outright deny it. Many people continue to believe, for instance, despite massive evidence to the contrary, that childhood vaccines cause autism (they do not); that people are safer owning a gun (they are not); that genetically modified crops are harmful (on balance, they have been beneficial); that climate change is not happening (it is).

(cut)

People are prone to resist scientific claims when they clash with intuitive beliefs. They don’t see measles or mumps around anymore. They do see children with autism. And they see a mom who says, “My child was perfectly fine until he got a vaccine and became autistic.”

Now, you can tell them that correlation is not causation. You can say that children get a vaccine every two to three months for the first couple years of their life, so the onset of any illness is bound to follow vaccination for many kids. You can say that the science shows no connection. But once an idea has got embedded and become widespread, it becomes very difficult to dig it out of people’s brains—especially when they do not trust scientific authorities. And we are experiencing a significant decline in trust in scientific authorities.

The article is here.

Tuesday, June 7, 2016

Student Resistance to Thought Experiments

Regina A. Rini
APA Newsletter - Teaching Philosophy
Spring 2016, Volume 15 (2)

Introduction

From Swampmen to runaway trolleys, philosophers make routine use of thought experiments. But our students are not always so enthusiastic. Most teachers of introductory philosophy will be familiar with the problem: students push back against the use of thought experiments, and not for the reasons that philosophers are likely to accept. Rather than challenge whether the thought experiments actually support particular conclusions, students instead challenge their realism or their relevance.

In this article I will look at these sorts of challenges, with two goals in mind. First, there is a practical pedagogical goal: How do we guide students to overcome their resistance to a useful method? Second, there is something I will call “pedagogical bad faith.” Many of us actually do have sincere doubts, as professional philosophers, about the value of thought experiment methodology. Some of these doubts in fact correspond to our students’ naïve resistance. But we often decide, for pedagogical reasons, to avoid mentioning our own doubts to students. Is this practice defensible?

The article is here.

Editor's Note: I agree with this article in many ways. After reading a philosophy article and listening to a podcast that used thought experiments, I offered critiques of how the thought experiments were limited to the author's own perspective. My criticisms were dismissed with an ad hominem attack on my lack of understanding of philosophy and of how philosophers work. I was told I should read more philosophy, especially Derek Parfit. I wish I had had this article several years ago.

Thursday, May 26, 2016

Morality When the Mind is Unknowable

By Rita A. McNamara
Character and Content
Originally posted on May 2, 2016

Here is an excerpt:

Our ability to infer the presence and content of other minds is a fundamental building block underlying the intuitions about right and wrong that we use to navigate our social worlds. People living in Western societies often identify internal motives, dispositions, and desires as the causes of all human action. That these behavioral drivers are inside of another mind is not an issue because, in this Western model of mind, people can be read like books – observers can infer other people’s motives and desires and use these inferences to understand and predict behavior. Given this Western model of mind as an internally coherent, autonomous driver of action, the effort spent on determining whether Martin meant to harm Barras seems so obviously justified as to go without question. But this is not necessarily the case for all cultures.

In many societies, people focus far more on relational ties and polite observance of social duties than on internal mental states. On the other end of the cultural spectrum of mental state focus, some small-scale societies have ‘Opacity of Mind’ norms that directly prohibit inference about mental states. In contrast to the Western model of mind, these Opacity of Mind norms often suggest that it is either impossible to know what another person is thinking, or rude to intrude into others’ private mental space. So, while mental state reasoning is a key foundation for intuitions about right and wrong, these intuitions and mental state perceptions are also dependent upon cultural influences.

The information is here.

Wednesday, March 30, 2016

Most Popular Theories of Consciousness Are Worse Than Wrong

Michael Graziano
The Atlantic
Originally published March 9, 2016

Here is an excerpt:

In the modern age we can chuckle over medieval naiveté, but we often suffer from similar conceptual confusions. We have our share of phlegm theories, which flatter our intuitions while explaining nothing. They’re compelling, they often convince, but at a deeper level they’re empty.

One corner of science where phlegm theories proliferate is the cognitive neuroscience of consciousness. The brain is a machine that processes information, yet somehow we also have a conscious experience of at least some of that information. How is that possible? What is subjective experience? It’s one of the most important questions in science, possibly the most important, the deepest way of asking: What are we? Yet many of the current proposals, even some that are deep and subtle, are phlegm theories.

The article is here.