Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Intuition. Show all posts

Sunday, October 23, 2022

Advancing theorizing about fast-and-slow thinking

De Neys, W. (2022). 
Behavioral and Brain Sciences, 1-68. 
doi:10.1017/S0140525X2200142X

Abstract

Human reasoning is often conceived as an interplay between a more intuitive and a more deliberate thought process. In the last 50 years, influential fast-and-slow dual process models that capitalize on this distinction have been used to account for numerous phenomena—from logical reasoning biases and prosocial behavior to moral decision-making. The present paper clarifies that, despite their popularity, critical assumptions of these models are poorly conceived. My critique focuses on two interconnected foundational issues: the exclusivity and switch features. The exclusivity feature refers to the tendency to conceive intuition and deliberation as generating unique responses, such that one type of response is assumed to be beyond the capability of the fast-intuitive processing mode. I review the empirical evidence in key fields and show that there is no solid ground for such exclusivity. The switch feature concerns the mechanism by which a reasoner decides to shift between more intuitive and deliberate processing. I present an overview of leading switch accounts and show that they are conceptually problematic—precisely because they presuppose exclusivity. I build on these insights to sketch the groundwork for a more viable dual process architecture and illustrate how it can set a new research agenda to advance the field in the coming years.

Conclusion

In the last 50 years dual process models of thinking have moved to the center stage in research on human reasoning. These models have been instrumental for the initial exploration of human thinking in the cognitive sciences and related fields (Chater, 2018; De Neys, 2021). However, it is time to rethink foundational assumptions. Traditional dual process models have typically conceived intuition and deliberation as generating unique responses such that one type of response is exclusively tied to deliberation and is assumed to be beyond the reach of the intuitive system. I reviewed empirical evidence from key dual process applications that argued against this exclusivity feature. I also showed how exclusivity leads to conceptual complications when trying to explain how a reasoner switches between intuitive and deliberate reasoning. To avoid these complications, I sketched an elementary non-exclusive working model in which it is the activation strength of competing intuitions within System 1 that determines System 2 engagement. 

It will be clear that the working model is a starting point that will need to be further developed and specified. However, by avoiding the conceptual paradoxes that plague the traditional model, it presents a more viable basic architecture that can serve as theoretical groundwork to build future dual process models in various fields. In addition, it should at the very least force dual process theorists to specify more explicitly how they address the switch issue. In the absence of such specification, dual process models might continue to provide an appealing narrative but will do little to advance our understanding of the interaction between intuitive and deliberate—fast and slow—thinking. It is in this sense that I hope that the present paper can help to sketch the building blocks of a more judicious dual process future.
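As a rough illustration of the proposed switch mechanism, here is a toy sketch in Python. It is not De Neys's implementation; the response labels, strength values, and threshold are hypothetical. It captures only the core idea that System 2 is engaged when competing System 1 intuitions have similar activation strengths:

```python
def engage_system2(intuition_strengths, threshold=0.2):
    """Toy switch rule: deliberate only when no single intuition dominates.

    intuition_strengths maps candidate responses to their System 1
    activation strengths (hypothetical values on a 0-1 scale).
    """
    ranked = sorted(intuition_strengths.values(), reverse=True)
    # A small gap between the two strongest intuitions signals conflict,
    # which is what triggers deliberate (System 2) processing.
    return (ranked[0] - ranked[1]) < threshold

# One dominant heuristic intuition: stay in the fast, intuitive mode.
print(engage_system2({"heuristic": 0.9, "logical": 0.4}))  # False
# Two closely matched intuitions: switch to slow deliberation.
print(engage_system2({"heuristic": 0.6, "logical": 0.5}))  # True
```

On this reading, the switch requires no mechanism that monitors for deliberate-only responses; it falls out of the relative activation of the intuitions that System 1 already computes.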

Saturday, April 9, 2022

Deciding to be authentic: Intuition is favored over deliberation when authenticity matters

K. Oktar & T. Lombrozo
Cognition
Volume 223, June 2022, 105021

Abstract

Deliberative analysis enables us to weigh features, simulate futures, and arrive at good, tractable decisions. So why do we so often eschew deliberation, and instead rely on more intuitive, gut responses? We propose that intuition might be prescribed for some decisions because people's folk theory of decision-making accords a special role to authenticity, which is associated with intuitive choice. Five pre-registered experiments find evidence in favor of this claim. In Experiment 1 (N = 654), we show that participants prescribe intuition and deliberation as a basis for decisions differentially across domains, and that these prescriptions predict reported choice. In Experiment 2 (N = 555), we find that choosing intuitively vs. deliberately leads to different inferences concerning the decision-maker's commitment and authenticity—with only inferences about the decision-maker's authenticity showing variation across domains that matches that observed for the prescription of intuition in Experiment 1. In Experiment 3 (N = 631), we replicate our prior results and rule out plausible confounds. Finally, in Experiment 4 (N = 177) and Experiment 5 (N = 526), we find that an experimental manipulation of the importance of authenticity affects the prescribed role for intuition as well as the endorsement of expert human or algorithmic advice. These effects hold beyond previously recognized influences on intuitive vs. deliberative choice, such as computational costs, presumed reliability, objectivity, complexity, and expertise.

From the Discussion section

Our theory and results are broadly consistent with prior work on cross-domain variation in processing preferences (e.g., Inbar et al., 2010), as well as work showing that people draw social inferences from intuitive decisions (e.g., Tetlock, 2003). However, we bridge and extend these literatures by relating inferences made on the basis of an individual's decision to cross-domain variation in the prescribed roles of intuition and deliberation. Importantly, our work is unique in showing that neither judgments about how decisions ought to be made, nor inferences from decisions, are fully reducible to considerations of differential processing costs or the reliability of a given process for the case at hand. Our stimuli—unlike those used in prior work (e.g., Inbar et al., 2010; Pachur & Spaar, 2015)—involved deliberation costs that had already been incurred at the time of decision, yet participants nevertheless displayed substantial and systematic cross-domain variation in their inferences, processing judgments, and eventual decisions. Most dramatically, our matched-information scenarios in Experiment 3 ensured that effects were driven by decision basis alone. In addition to excluding the computational costs of deliberation and matching the decision to deliberate, these scenarios also matched the evidence available concerning the quality of each choice. Nonetheless, decisions that were based on intuition vs. deliberation were judged differently along a number of dimensions, including their authenticity.

Tuesday, July 13, 2021

Valence framing effects on moral judgments: A meta-analysis

McDonald, K., et al.
Cognition
Volume 212, July 2021, 104703

Abstract

Valence framing effects occur when participants make different choices or judgments depending on whether the options are described in terms of their positive outcomes (e.g., lives saved) or their negative outcomes (e.g., lives lost). When such framing effects occur in the domain of moral judgments, they have been taken to cast doubt on the reliability of moral judgments and to raise questions about the extent to which these judgments are self-evident or justified in themselves. One important factor in this debate is the magnitude and variability of the effect that differences in framing presentation have on moral judgments. Although moral framing effects have been studied by psychologists, the overall strength of these effects pooled across published studies is not yet known. Here we conducted a meta-analysis of 109 published articles (contributing a total of 146 unique experiments with 49,564 participants) involving valence framing effects on moral judgments and found a moderate effect (d = 0.50) among between-subjects designs, along with several moderator variables. We find evidence of publication bias; statistically accounting for it attenuates, but does not eliminate, the effect (d = 0.22). This suggests that the magnitude of valence framing effects on moral decisions is small, yet still significant once publication bias is taken into account.
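For readers less familiar with the effect-size metric, here is a brief sketch of how a between-subjects Cohen's d (the statistic pooled in this meta-analysis) is computed. The numbers below are hypothetical, chosen only to yield a d of about 0.50:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Between-subjects Cohen's d: mean difference in pooled-SD units."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical framing experiment: wrongness ratings (1-7 scale) of the
# same action described in a loss frame versus a gain frame.
d = cohens_d(mean1=5.1, mean2=4.4, sd1=1.4, sd2=1.4, n1=120, n2=120)
print(round(d, 2))  # 0.5, a moderate effect by conventional benchmarks
```

A d of 0.50 means the two framing conditions differ by half a pooled standard deviation; the bias-corrected estimate of 0.22 corresponds to roughly a fifth of a standard deviation.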

Sunday, July 11, 2021

It just feels right: an account of expert intuition

Fridland, E., & Stichter, M. 
Synthese (2020). 
https://doi.org/10.1007/s11229-020-02796-9

Abstract

One of the hallmarks of virtue is reliably acting well. Such reliable success presupposes that an agent (1) is able to recognize the morally salient features of a situation and the appropriate response to those features, and (2) is motivated to act on this knowledge without internal conflict. Furthermore, it is often claimed that the virtuous person can do this (3) in a spontaneous or intuitive manner. While these claims represent an ideal of what it is to have a virtue, it is less clear how to make good on them. That is, how is it actually possible to spontaneously and reliably act well? In this paper, we will lay out a framework for understanding how it is that one could reliably act well in an intuitive manner. We will do this by developing the concept of an action schema, which draws on the philosophical and psychological literature on skill acquisition and self-regulation. In short, we will give an account of how self-regulation, grounded in skillful structures, can allow for the accurate intuitions and flexible expertise required for virtue. While our primary goal in this paper is to provide a positive theory of how virtuous intuitions might be accounted for, we also take ourselves to be raising the bar for what counts as an explanation of reliable and intuitive action in general.

Conclusion

By thinking of skill and expertise as sophisticated forms of self-regulation, we are able to get a handle on intuition generally, and on the ways in which reliably accurate intuition may develop in virtue specifically. This gives us a way of explaining both the accuracy and immediacy of the virtuous person's perception and intuitive responsiveness to a situation, and it also gives us further reason to prefer a virtue-as-skill account of virtue. Moreover, such an approach gives us the resources to explain, with some rigor and precision, the ways in which expert intuition can be accounted for by appeal to action schemas. Lastly, our approach provides reason to think that expert intuition in the realm of virtue can indeed develop over time and with practice in a way that is flexible, controlled, and intelligent. It lends credence to the view that virtue is learned and that we can act reliably and well by grounding our actions in expert intuition.

Tuesday, March 30, 2021

On Dual- and Single-Process Models of Thinking

De Neys, W.
Perspectives on Psychological Science. 
February 2021. 
doi:10.1177/1745691620964172

Abstract

Popular dual-process models of thinking have long conceived intuition and deliberation as two qualitatively different processes. Single-process-model proponents claim that the difference is a matter of degree and not of kind. Psychologists have been debating the dual-process/single-process question for at least 30 years. In the present article, I argue that it is time to leave the debate behind. I present a critical evaluation of the key arguments and critiques and show that—contra both dual- and single-process-model proponents—there is currently no good evidence that allows one to decide the debate. Moreover, I clarify that even if the debate were to be solved, the resolution would be irrelevant for psychologists because it would not advance the understanding of the processing mechanisms underlying human thinking.

Time to Move On

The dual- versus single-process model debate has not been resolved; it is questionable whether it can be resolved; and even if it were resolved, it would not inform our theory development about the critical processing mechanisms underlying human thinking. This implies that the debate is irrelevant for the empirical study of thinking. In a sense, the choice between a single- and a dual-process model boils down—quite literally—to a choice between two different religions. Scholars can (and may) have different personal beliefs and preferences as to which model serves their conceptualizing and communicative goals best. What they cannot do, however, is claim that there are good empirical or theoretical scientific arguments to favor one over the other.

I do not contest that the single vs dual process model debate might have been useful in the past. For example, the relentless critique of single process proponents helped to discard the erroneous perfect feature alignment view. Likewise, the work of Evans and Stanovich in trying to pinpoint defining features was helpful to start sketching the descriptive building blocks of the mental simulation and cognitive decoupling process. Hence, I do believe that the debate has had some positive by-products. 

Tuesday, May 5, 2020

How stress influences our morality

Lucius Caviola and Nadira Faulmüller
Oxford Martin School

Abstract

Several studies show that stress can influence moral judgment and behavior. In personal moral dilemmas—scenarios where someone has to be harmed by physical contact in order to save several others—participants under stress tend to make more deontological judgments than nonstressed participants, i.e. they agree less with harming someone for the greater good. Other studies demonstrate that stress can increase pro-social behavior for in-group members but decrease it for out-group members. The dual-process theory of moral judgment in combination with an evolutionary perspective on emotional reactions seems to explain these results: stress might inhibit controlled reasoning and trigger people’s automatic emotional intuitions. In other words, when it comes to morality, stress seems to make us prone to follow our gut reactions instead of our elaborate reasoning.

From the Implications Section

The conclusions drawn from these studies seem to raise an important question: if our moral judgments are so dependent on stress, which of our judgments should we rely on—the ones elicited by stress or the ones we come to after careful consideration? Most people would probably not regard a physiological reaction, such as stress, as a relevant normative factor that should have a qualified influence on our moral values. Instead, our reflective moral judgments seem to represent better what we really care about. This should make us suspicious of the normative validity of emotional intuitions in general. Thus, in order to identify our moral values, we should not blindly follow our gut reactions, but try to think more deliberately about what we care about.

For example, as noted, we might be more prone to help a poor beggar on the street when we are stressed. Here, even after careful reflection, we might conclude that this stress-elicited emotional reaction is the morally right thing to do after all. However, in other situations this might not be the case. As we have seen, we are less prone to donate money to charity when stressed (cf. Vinkers et al., 2013). But is this reaction really in line with what we consider to be the morally right thing to do after careful reflection? After all, if we care about the well-being of a single beggar, why should the lives of the many more people who could potentially benefit from our donation count for less?

The research is here.

Sunday, February 16, 2020

Fast optimism, slow realism? Causal evidence for a two-step model of future thinking

Hallgeir Sjåstad and Roy F. Baumeister
PsyArXiv
Originally posted 6 Jan 20

Abstract

Future optimism is a widespread phenomenon, often attributed to the psychology of intuition. However, causal evidence for this explanation is lacking, and sometimes cautious realism is found instead. One resolution is that thoughts about the future proceed in two steps: a first step imagining the desired outcome, followed by a sobering reflection on how to get there. Four pre-registered experiments supported this two-step model, showing that fast predictions are more optimistic than slow predictions. The total sample consisted of 2,116 participants from the USA and Norway, providing 9,036 predictions. In Study 1, participants in the fast-response condition thought positive events were more likely to happen and negative events less likely, as compared to participants in the slow-response condition. Although the predictions were optimistically biased in both conditions, future optimism was significantly stronger among fast responders. Participants in the fast-response condition also relied more on intuitive heuristics (CRT). Studies 2 and 3 focused on future health problems (e.g., getting a heart attack or diabetes), where participants in the fast-response condition thought they were at lower risk. Study 4 provided a direct replication, with the additional finding that fast predictions were more optimistic only for the self (vs. the average person). The results suggest that when people think about their personal future, the first response is optimistic, and only later may it be followed by a second step of reflective realism. Current health, income, trait optimism, perceived control, and happiness were negatively correlated with health-risk predictions but did not moderate the fast-optimism effect.

From the Discussion section:

Four studies found that people made more optimistic predictions when they relied on fast intuition rather than slow reflection. Apparently, a delay of 15 seconds is sufficient to enable second thoughts and a drop in future optimism. The slower responses were still "unrealistically optimistic" (Weinstein, 1980; Shepperd et al., 2013), but to a much lesser extent than the fast responses. We found this fast-optimism effect in relative comparisons to the average person and in isolated judgments of one's own likelihood, in two different languages across two different countries, and in one direct replication. All four experiments were pre-registered, and the total sample consisted of about 2,000 participants making more than 9,000 predictions.

Friday, December 20, 2019

Study offers first large-sample evidence of the effect of ethics training on financial sector behavior

Shannon Roddel
phys.org
Originally posted 21 Nov 19


Here is an excerpt:

"Behavioral ethics research shows that business people often do not recognize when they are making ethical decisions," he says. "They approach these decisions by weighing costs and benefits, and by using emotion or intuition."

These results are consistent with the exam playing a "priming" role, where early exposure to rules and ethics material prepares the individual to behave appropriately later. Those passing the exam without prior misconduct appear to respond most to the amount of rules and ethics material covered on their exam. Those already engaging in misconduct, or having spent several years working in the securities industry, respond least or not at all.

The study also examines what happens when people with more ethics training find themselves surrounded by bad behavior, revealing these individuals are more likely to leave their jobs.

"We study this effect both across organizations and within Wells Fargo, during their account fraud scandal," Kowaleski explains. "That those with more ethics training are more likely to leave misbehaving organizations suggests the self-reinforcing nature of corporate culture."

The info is here.

Monday, October 14, 2019

Principles of karmic accounting: How our intuitive moral sense balances rights and wrongs

Samuel Johnson and Jaye Ahn
PsyArXiv
Originally posted September 10, 2019

Abstract

We are all saints and sinners: Some of our actions benefit other people, while other actions harm people. How do people balance moral rights against moral wrongs when evaluating others' actions? Across 9 studies, we contrast the predictions of three conceptions of intuitive morality—outcome-based (utilitarian), act-based (deontological), and person-based (virtue ethics) approaches. Although good acts can partly offset bad acts—consistent with utilitarianism—they do so incompletely and in a manner relatively insensitive to magnitude, but sensitive to temporal order and to the match between who is helped and who is harmed. Inferences about personal moral character best predicted blame judgments, explaining variance across items and across participants. However, there was modest evidence for both deontological and utilitarian processes too. These findings contribute to conversations about moral psychology and person perception, and may have policy implications.

General Discussion

These studies begin to map out the principles governing how the mind combines rights and wrongs to form summary judgments of blameworthiness. Moreover, these principles are explained by inferences about character, which also explain differences across scenarios and participants. These results overall buttress person-based accounts of morality (Uhlmann et al., 2014), according to which morality serves primarily to identify and track individuals likely to be cooperative and trustworthy social partners in the future.

These results also have implications for moral psychology beyond third-party judgments. Moral behavior is motivated largely by its expected reputational consequences; thus, studying the psychology of third-party reputational judgments is key to understanding people's behavior when they have opportunities to perform licensing or offsetting acts. For example, theories of moral self-licensing (Merritt et al., 2010) disagree over whether licensing occurs due to moral credits (i.e., having done good, one can now "spend" the moral credit on a harm) versus moral credentials (i.e., having done good, later bad acts are reframed as less blameworthy).

The research is here.

Wednesday, April 18, 2018

Is There A Difference Between Ethics And Morality In Business?

Bruce Weinstein
Forbes.com
Originally published February 23, 2018

Here is an excerpt:

In practical terms, if you use both “ethics” and “morality” in conversation, the people you’re speaking with will probably take issue with how you’re using these terms, even if they believe they’re distinct in some way.

The conversation will then veer from whatever substantive ethical point you were trying to make (“Our company has an ethical and moral responsibility to hire and promote only honest, accountable people”) to an argument about the meaning of the words “ethical” and “moral.” I had plenty of those arguments as a graduate student in philosophy, but is that the kind of discussion you really want to have at a team meeting or business conference?

You can do one of three things, then:

1. Use “ethics” and “morality” interchangeably only when you’re speaking with people who believe they’re synonymous.

2. Choose one term and stick with it.

3. Minimize the use of both words and instead refer to what each word is broadly about: doing the right thing, leading an honorable life and acting with high character.

As a professional ethicist, I’ve come to see #3 as the best option. That way, I don’t have to guess whether the person I’m speaking with believes ethics and morality are identical concepts, which is futile when you’re speaking to an audience of 5,000 people.

The information is here.

Note: I do not agree with everything in this article, but it is worth contemplating.

Tuesday, January 30, 2018

Your Brain Creates Your Emotions

Lisa Feldman Barrett
TED Talk
Published December 2017

Can you look at someone's face and know what they're feeling? Does everyone experience happiness, sadness and anxiety the same way? What are emotions anyway? For the past 25 years, psychology professor Lisa Feldman Barrett has mapped facial expressions, scanned brains and analyzed hundreds of physiology studies to understand what emotions really are. She shares the results of her exhaustive research -- and explains how we may have more control over our emotions than we think.

Wednesday, December 13, 2017

Moralized Rationality: Relying on Logic and Evidence in the Formation and Evaluation of Belief Can Be Seen as a Moral Issue

Tomas Ståhl, Maarten P. Zaal, and Linda J. Skitka
PLOS One
Published November 16, 2017

Abstract

In the present article we demonstrate stable individual differences in the extent to which a reliance on logic and evidence in the formation and evaluation of beliefs is perceived as a moral virtue, and a reliance on less rational processes is perceived as a vice. We refer to this individual difference variable as moralized rationality. Eight studies are reported in which an instrument to measure individual differences in moralized rationality is validated. Results show that the Moralized Rationality Scale (MRS) is internally consistent, and captures something distinct from the personal importance people attach to being rational (Studies 1–3). Furthermore, the MRS has high test-retest reliability (Study 4), is conceptually distinct from frequently used measures of individual differences in moral values, and it is negatively related to common beliefs that are not supported by scientific evidence (Study 5). We further demonstrate that the MRS predicts morally laden reactions, such as a desire for punishment, of people who rely on irrational (vs. rational) ways of forming and evaluating beliefs (Studies 6 and 7). Finally, we show that the MRS uniquely predicts motivation to contribute to a charity that works to prevent the spread of irrational beliefs (Study 8). We conclude that (1) there are stable individual differences in the extent to which people moralize a reliance on rationality in the formation and evaluation of beliefs, (2) that these individual differences do not reduce to the personal importance attached to rationality, and (3) that individual differences in moralized rationality have important motivational and interpersonal consequences.
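Since the abstract leans on internal consistency (Studies 1-3), a short sketch of the standard internal-consistency statistic, Cronbach's alpha, may be helpful. The response data below are simulated for illustration, not the authors' MRS data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_variances / total_score_variance)

# Simulated 7-point Likert responses to a 9-item scale: each item reflects
# one shared latent attitude plus item-specific noise.
rng = np.random.default_rng(seed=1)
latent = rng.normal(4.0, 1.0, size=(200, 1))
items = np.clip(np.rint(latent + rng.normal(0.0, 0.8, size=(200, 9))), 1, 7)
print(round(cronbach_alpha(items), 2))  # high alpha: the items hang together
```

An alpha near 1 indicates that the items covary strongly, i.e., that the scale plausibly measures a single coherent construct.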

The research is here.

Friday, November 17, 2017

Going with your gut may mean harsher moral judgments

Jeff Sossamon
www.futurity.org
Originally posted November 2, 2017

Going with your intuition could make you judge others’ moral transgressions more harshly and keep you from changing your mind, even after considering all the facts, a new study suggests.

The findings show that people who strongly rely on intuition automatically condemn actions they perceive to be morally wrong, even if there is no actual harm.

In psychology, intuition, or “gut instinct,” is defined as the ability to understand something immediately, without the need for reasoning.

“It is now widely acknowledged that intuitive processing influences moral judgment,” says Sarah Ward, a doctoral candidate in social and personality psychology at the University of Missouri.

“We thought people who were more likely to trust their intuition would be more likely to condemn things that are shocking, whereas people who don’t rely on gut feelings would not condemn these same actions as strongly,” Ward says.

Ward and Laura King, professor of psychological sciences, had study participants read through a series of scenarios and judge whether the action was wrong, such as an individual giving a partner a gift that had previously been purchased for an ex.

The article is here.

Sunday, November 12, 2017

Why You Don’t See the Forest for the Trees When You Are Anxious: Anxiety Impairs Intuitive Decision Making

Carina Remmers and Thea Zander
Clinical Psychological Science
First Published September 27, 2017

Abstract

Intuitive decisions arise effortlessly from an unconscious, associative coherence-detection process. In this way, they guide people adaptively through everyday decision making. When people are anxious, however, they often make poor decisions or no decision at all. Is intuition impaired in a state of anxiety? The aim of the current experiment was to examine this question in a between-subjects design. A total of 111 healthy participants were randomly assigned to an anxious, positive, or neutral multimodal mood induction, after which they performed the established semantic coherence task. This task operationalizes intuition as the sudden, inexplicable detection of environmental coherence, based on automatic, unconscious processes of spreading activation. The current findings show that anxious participants exhibited impaired intuitive performance compared to participants in the positive and neutral mood groups. Trait anxiety did not moderate this effect. Accordingly, holistic, associative processes seem to be impaired by anxiety. Clinical implications and directions for future research are discussed.

The article is here.

Tuesday, October 31, 2017

Does Your Gut Always Steer You Right?

Elizabeth Bernstein
The Wall Street Journal
Originally published October 9, 2017

Here is an excerpt:

When should you trust your gut? Consult your gut for complex decisions.

These include important, but not life-or-death, choices such as what car to buy, where to move, which job offer to accept. Your conscious mind will have too much information to sort through, and there may not be one clear choice. For example, there’s a lot to consider when deciding on a new home: neighborhood (Close to work but not as fun? Farther away but nicer?), price, type of home (Condo or house?). Research shows that when people are given four choices of which car to buy or which apartment to rent—with slightly different characteristics to each—and then are distracted from consciously thinking about their decision, they make better choices. “Our conscious mind is not very good at having all these choices going on at once,” says Dr. Bargh. “When you let your mind work on this without paying conscious attention, you make a better decision.”

Using unconscious and conscious thought to make a decision is often best. And conscious thought should come first. An excellent way to do this is to make a list of the benefits and drawbacks of each choice you could make. We are trained in rational decision-making, so this will satisfy your conscious mind. And sometimes the list will be enough to show you a clear decision.

But if it isn’t, put it away and do something that absorbs your conscious mind. Go for a hike or run, walk on the beach, play chess, practice a musical instrument. (No vegging out in front of the TV; that’s too mind-numbing, experts say.) “Go into yourself without distractions from the outside, and your unconscious will keep working on the problem,” says Emeran Mayer, a gastroenterologist and neuroscientist and the author of “The Mind-Gut Connection” and a professor at UCLA’s David Geffen School of Medicine.

If the stakes are high, try to think rationally

Even if time is tight. For example, if your gut tells you to jump in front of a train to help someone who just fell on the tracks, that might be worth risking your life. If it’s telling you to jump in front of that train because you dropped your purse, it’s not. Your rational mind, not your gut, will know the difference, Dr. Bargh says.

The article is here.

Note: As usual, I don't agree with everything in this article.

Tuesday, August 1, 2017

Morality isn’t a compass — it’s a calculator

DB Krupp
The Conversation
Originally published July 9, 2017

Here is the conclusion:

Unfortunately, the beliefs that straddle moral fault lines are largely impervious to empirical critique. We simply embrace the evidence that supports our cause and deny the evidence that doesn’t. If strategic thinking motivates belief, and belief motivates reason, then we may be wasting our time trying to persuade the opposition to change their minds.

Instead, we should strive to change the costs and benefits that provoke discord in the first place. Many disagreements are the result of worlds colliding — people with different backgrounds making different assessments of the same situation. By closing the gap between their experiences and by lowering the stakes, we can bring them closer to consensus. This may mean reducing inequality, improving access to health care or increasing contact between unfamiliar groups.

We have little reason to see ourselves as unbiased sources of moral righteousness, but we probably will anyway. The least we can do is minimize that bias a bit.

The article is here.

Wednesday, June 7, 2017

On the cognitive (neuro)science of moral cognition: utilitarianism, deontology and the ‘fragmentation of value’

Alejandro Rosas
Working Paper: May 2017

Abstract

Scientific explanations of human higher capacities, traditionally denied to other animals, attract the attention of both philosophers and other workers in the humanities. They are often viewed with suspicion and skepticism. In this paper I critically examine the dual-process theory of moral judgment proposed by Greene and collaborators and the normative consequences drawn from that theory. I believe normative consequences are warranted, in principle, but I propose an alternative dual-process model of moral cognition that leads to a different normative consequence, which I dub ‘the fragmentation of value’. In the alternative model, the neat overlap between the deontological/utilitarian divide and the intuitive/reflective divide is abandoned. Instead, we have both utilitarian and deontological intuitions, equally fundamental and partially in tension. Cognitive control is sometimes engaged during a conflict between intuitions. When it is engaged, the result of control is not always utilitarian; sometimes it is deontological. I describe in some detail how this version is consistent with evidence reported by many studies, and what could be done to find more evidence to support it.

The working paper is here.

Tuesday, May 9, 2017

Inside Libratus, the Poker AI That Out-Bluffed the Best Humans

Cade Metz
Wired Magazine
Originally published February 1, 2017

Here is an excerpt:

Libratus relied on three different systems that worked together, a reminder that modern AI is driven not by one technology but many. Deep neural networks get most of the attention these days, and for good reason: They power everything from image recognition to translation to search at some of the world’s biggest tech companies. But the success of neural nets has also pumped new life into so many other AI techniques that help machines mimic and even surpass human talents.

Libratus, for one, did not use neural networks. Mainly, it relied on a form of AI known as reinforcement learning, a method of extreme trial-and-error. In essence, it played game after game against itself. Google's DeepMind lab used reinforcement learning in building AlphaGo, the system that cracked the ancient game of Go ten years ahead of schedule, but there's a key difference between the two systems. AlphaGo learned the game by analyzing 30 million Go moves from human players, before refining its skills by playing against itself. By contrast, Libratus learned from scratch.

Through an algorithm called counterfactual regret minimization, it began by playing at random, and eventually, after several months of training and trillions of hands of poker, it too reached a level where it could not just challenge the best humans but play in ways they couldn’t—playing a much wider range of bets and randomizing these bets, so that rivals have more trouble guessing what cards it holds. “We give the AI a description of the game. We don’t tell it how to play,” says Noam Brown, a CMU grad student who built the system alongside his professor, Tuomas Sandholm. “It develops a strategy completely independently from human play, and it can be very different from the way humans play the game.”
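Counterfactual regret minimization builds on a simple primitive called regret matching: play each action with probability proportional to the cumulative regret for not having played it. Below is a minimal regret-matching sketch for rock-paper-scissors self-play. It illustrates the "start at random, learn from regret" dynamic the article describes, not Libratus's actual poker implementation (full CFR propagates counterfactual values through a game tree):

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def utility(mine, theirs):
    if mine == theirs:
        return 0
    return 1 if (mine, theirs) in BEATS else -1

def strategy_from_regrets(regrets):
    """Regret matching: weight each action by its positive cumulative regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / len(ACTIONS)] * len(ACTIONS)  # no regrets yet: play uniformly
    return [p / total for p in positive]

def self_play(iterations=20000):
    regrets = [[0.0] * 3 for _ in range(2)]        # cumulative regrets per player
    strategy_sums = [[0.0] * 3 for _ in range(2)]  # accumulates the average strategy
    for _ in range(iterations):
        strategies = [strategy_from_regrets(r) for r in regrets]
        moves = [random.choices(range(3), weights=s)[0] for s in strategies]
        for p in range(2):
            opponent_move = ACTIONS[moves[1 - p]]
            realized = utility(ACTIONS[moves[p]], opponent_move)
            for a in range(3):
                # Regret of not having played action a this round.
                regrets[p][a] += utility(ACTIONS[a], opponent_move) - realized
                strategy_sums[p][a] += strategies[p][a]
    # The average strategy converges toward equilibrium (uniform 1/3 in RPS).
    return [[s / iterations for s in sums] for sums in strategy_sums]

print(self_play()[0])  # roughly [0.33, 0.33, 0.33]
```

The same regret-driven update, applied at every decision point of a game tree with counterfactual weighting, is what lets CFR-style systems begin from random play and converge toward hard-to-exploit strategies.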

The article is here.

Friday, March 3, 2017

Doctors suffer from the same cognitive distortions as the rest of us

Michael Lewis
Nautilus
Originally posted February 9, 2017

Here are two excerpts:

What struck Redelmeier wasn’t the idea that people made mistakes. Of course people made mistakes! What was so compelling is that the mistakes were predictable and systematic. They seemed ingrained in human nature. One passage in particular stuck with him—about the role of the imagination in human error. “The risk involved in an adventurous expedition, for example, is evaluated by imagining contingencies with which the expedition is not equipped to cope,” the authors wrote. “If many such difficulties are vividly portrayed, the expedition can be made to appear exceedingly dangerous, although the ease with which disasters are imagined need not reflect their actual likelihood. Conversely, the risk involved in an undertaking may be grossly underestimated if some possible dangers are either difficult to conceive of, or simply do not come to mind.” This wasn’t just about how many words in the English language started with the letter K. This was about life and death.

(cut)

Toward the end of their article in Science, Daniel Kahneman and Amos Tversky had pointed out that, while statistically sophisticated people might avoid the simple mistakes made by less savvy people, even the most sophisticated minds were prone to error. As they put it, “their intuitive judgments are liable to similar fallacies in more intricate and less transparent problems.” That, the young Redelmeier realized, was a “fantastic rationale why brilliant physicians were not immune to these fallibilities.” Error wasn’t necessarily shameful; it was merely human. “They provided a language and a logic for articulating some of the pitfalls people encounter when they think. Now these mistakes could be communicated. It was the recognition of human error. Not its denial. Not its demonization. Just the understanding that they are part of human nature.”

The article is here.