Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Heuristics.

Friday, March 31, 2023

Do conspiracy theorists think too much or too little?

N.M. Brashier
Current Opinion in Psychology
Volume 49, February 2023, 101504

Abstract

Conspiracy theories explain distressing events as malevolent actions by powerful groups. Why do people believe in secret plots when other explanations are more probable? On the one hand, conspiracy theorists seem to disregard accuracy; they tend to endorse mutually incompatible conspiracies, think intuitively, use heuristics, and hold other irrational beliefs. But by definition, conspiracy theorists reject the mainstream explanation for an event, often in favor of a more complex account. They exhibit a general distrust of others and expend considerable effort to find ‘evidence’ supporting their beliefs. In searching for answers, conspiracy theorists likely expose themselves to misleading information online and overestimate their own knowledge. Understanding when elaboration and cognitive effort might backfire is crucial, as conspiracy beliefs lead to political disengagement, environmental inaction, prejudice, and support for violence.

Implications

People who are drawn to conspiracy theories exhibit other stable traits – like lower cognitive ability, intuitive thinking, and proneness to cognitive biases – that suggest they are ‘lazy thinkers.’ On the other hand, conspiracy theorists also exhibit extreme levels of skepticism and expend energy justifying their beliefs; this effortful processing can ironically reinforce conspiracy beliefs. Thus, people carelessly fall down rabbit holes at some points (e.g., when reading repetitive conspiratorial claims) and methodically climb down at others (e.g., when initiating searches online). Conspiracy theories undermine elections, threaten the environment, and harm human health, so it is vitally important that interventions aimed at increasing evaluation and reducing these beliefs do not inadvertently backfire.

Friday, December 2, 2022

Rational use of cognitive resources in human planning

Callaway, F., van Opheusden, B., Gul, S. et al. 
Nat Hum Behav 6, 1112–1125 (2022).
https://doi.org/10.1038/s41562-022-01332-8

Abstract

Making good decisions requires thinking ahead, but the huge number of actions and outcomes one could consider makes exhaustive planning infeasible for computationally constrained agents, such as humans. How people are nevertheless able to solve novel problems when their actions have long-reaching consequences is thus a long-standing question in cognitive science. To address this question, we propose a model of resource-constrained planning that allows us to derive optimal planning strategies. We find that previously proposed heuristics such as best-first search are near optimal under some circumstances but not others. In a mouse-tracking paradigm, we show that people adapt their planning strategies accordingly, planning in a manner that is broadly consistent with the optimal model but not with any single heuristic model. We also find systematic deviations from the optimal model that might result from additional cognitive constraints that are yet to be uncovered.

Discussion

In this paper, we proposed a rational model of resource-constrained planning and compared the predictions of the model to human behaviour in a process-tracing paradigm. Our results suggest that human planning strategies are highly adaptive in ways that previous models cannot capture. In Experiment 1, we found that the optimal planning strategy in a generic environment resembled best-first search with a relative stopping rule. Participant behaviour was also consistent with such a strategy. However, the optimal planning strategy depends on the structure of the environment. Thus, in Experiments 2 and 3, we constructed six environments in which the optimal strategy resembled different classical search algorithms (best-first, breadth-first, depth-first and backward search). In each case, participant behaviour matched the environment-appropriate algorithm, as the optimal model predicted.
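
To make the planning strategy concrete, here is a minimal sketch (in Python, my illustration rather than the authors' code) of best-first search with a relative stopping rule: the planner keeps inspecting the node on the most promising path and stops once the leading path beats the runner-up by a margin. The toy tree, rewards, and margin value are assumptions for demonstration only.

import heapq

# Illustrative sketch only: best-first node inspection with a relative
# stopping rule. The tree, rewards, and margin are assumed for demonstration.
def best_first_plan(tree, rewards, root, margin=5.0):
    """tree: node -> list of children; rewards: node -> reward at that node.
    Inspects nodes in order of the best cumulative reward leading to them,
    and stops once the leading path beats the runner-up by `margin`."""
    frontier = [(-rewards.get(root, 0.0), root)]  # (negated path value, node)
    inspected = []
    while frontier:
        if len(frontier) > 1:
            (best_neg, _), (second_neg, _) = heapq.nsmallest(2, frontier)
            if (-best_neg) - (-second_neg) >= margin:
                break  # relative stopping rule: confident enough, stop planning
        neg_value, node = heapq.heappop(frontier)
        inspected.append(node)
        for child in tree.get(node, []):
            child_value = -neg_value + rewards.get(child, 0.0)
            heapq.heappush(frontier, (-child_value, child))
    return inspected

# Toy example: two branches from the root, one clearly better than the other.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
rewards = {"a": 4.0, "b": 1.0, "a1": 6.0, "a2": 2.0, "b1": 3.0, "b2": 0.0}
print(best_first_plan(tree, rewards, "root"))  # stops before exploring the weak branch

With a smaller margin the same routine keeps expanding nodes until the frontier is exhausted, which is one way to see why the optimal amount of planning depends on the structure of the environment.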

The idea that people use heuristics that are jointly adapted to environmental structure and computational limitations is not new. First popularized by Herbert Simon, it has more recently been championed in ecological rationality, which generally takes the approach of identifying computationally frugal heuristics that make accurate choices in certain environments. However, while ecological rationality explicitly rejects the notion of optimality, our approach embraces it, identifying heuristics that maximize an objective function that includes both external utility and internal cognitive cost. Supporting our approach, we found that the optimal model explained human planning behaviour better than flexible combinations of previously proposed planning heuristics in seven of the eight environments we considered (Supplementary Table 1).

Sunday, October 23, 2022

Advancing theorizing about fast-and-slow thinking

De Neys, W. (2022). 
Behavioral and Brain Sciences, 1-68. 
doi:10.1017/S0140525X2200142X

Abstract

Human reasoning is often conceived as an interplay between a more intuitive and deliberate thought process. In the last 50 years, influential fast-and-slow dual process models that capitalize on this distinction have been used to account for numerous phenomena—from logical reasoning biases and prosocial behavior to moral decision-making. The present paper clarifies that despite their popularity, critical assumptions are poorly conceived. My critique focuses on two interconnected foundational issues: the exclusivity and switch feature. The exclusivity feature refers to the tendency to conceive intuition and deliberation as generating unique responses such that one type of response is assumed to be beyond the capability of the fast-intuitive processing mode. I review the empirical evidence in key fields and show that there is no solid ground for such exclusivity. The switch feature concerns the mechanism by which a reasoner can decide to shift between more intuitive and deliberate processing. I present an overview of leading switch accounts and show that they are conceptually problematic—precisely because they presuppose exclusivity. I build on these insights to sketch the groundwork for a more viable dual process architecture and illustrate how it can set a new research agenda to advance the field in the coming years.

Conclusion

In the last 50 years dual process models of thinking have moved to the center stage in research on human reasoning. These models have been instrumental for the initial exploration of human thinking in the cognitive sciences and related fields (Chater, 2018; De Neys, 2021). However, it is time to rethink foundational assumptions. Traditional dual process models have typically conceived intuition and deliberation as generating unique responses such that one type of response is exclusively tied to deliberation and is assumed to be beyond the reach of the intuitive system. I reviewed empirical evidence from key dual process applications that argued against this exclusivity feature. I also showed how exclusivity leads to conceptual complications when trying to explain how a reasoner switches between intuitive and deliberate reasoning. To avoid these complications, I sketched an elementary non-exclusive working model in which it is the activation strength of competing intuitions within System 1 that determines System 2 engagement. 
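
As a rough illustration of the switch mechanism sketched in this conclusion, the toy function below engages deliberation only when competing System 1 intuitions have similar activation strengths. The numeric activations and the threshold are assumptions for illustration, not De Neys's formal model.

# Illustrative sketch: System 2 is engaged when no System 1 intuition
# clearly dominates. Activation values and the threshold are assumptions.
def should_deliberate(activations, threshold=0.2):
    """activations: dict mapping candidate responses to System 1 activation
    strengths. Returns True when the conflict between the two strongest
    intuitions is high enough to warrant deliberation."""
    strengths = sorted(activations.values(), reverse=True)
    if len(strengths) < 2:
        return False               # an uncontested intuition wins outright
    return strengths[0] - strengths[1] < threshold  # small gap = high conflict

# e.g. a bat-and-ball-style item: strong heuristic intuition vs. a logical one
print(should_deliberate({"heuristic response": 0.9, "logical response": 0.8}))  # True
print(should_deliberate({"heuristic response": 0.9, "logical response": 0.3}))  # False

On this kind of account, deliberation is triggered by conflict between intuitions rather than by a response that only System 2 could generate.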

It will be clear that the working model is a starting point that will need to be further developed and specified. However, by avoiding the conceptual paradoxes that plague the traditional model, it presents a more viable basic architecture that can serve as theoretical groundwork to build future dual process models in various fields. In addition, it should at the very least force dual process theorists to specify more explicitly how they address the switch issue. In the absence of such specification, dual process models might continue to provide an appealing narrative but will do little to advance our understanding of the interaction between intuitive and deliberate—fast and slow—thinking. It is in this sense that I hope that the present paper can help to sketch the building blocks of a more judicious dual process future.

Monday, July 18, 2022

The One That Got Away: Overestimation of Forgone Alternatives as a Hidden Source of Regret

Feiler, D., & Müller-Trede, J. (2022).
Psychological Science, 33(2), 314–324.
https://doi.org/10.1177/09567976211032657

Abstract

Past research has established that observing the outcomes of forgone alternatives is an important driver of regret. In this research, we predicted and empirically corroborated a seemingly opposite result: Participants in our studies were more likely to experience regret when they did not observe a forgone outcome than when it was revealed. Our prediction drew on two theoretical observations. First, feelings of regret frequently stem from comparing a chosen option with one’s belief about what the forgone alternative would have been. Second, when there are many alternatives to choose from under uncertainty, the perceived attractiveness of the almost-chosen alternative tends to exceed its reality. In four preregistered studies (Ns = 800, 599, 150, and 197 adults), we found that participants predictably overestimated the forgone path, and this overestimation caused undue regret. We discuss the psychological implications of this hidden source of regret and reconcile the ostensible contradiction with past research.
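
The selection effect behind this overestimation can be seen in a small simulation (my illustration, not the authors' materials): when many options are judged with noise and ranked by those judgments, the runner-up (the option one almost chose) is disproportionately an option whose noise was positive, so its estimate exceeds its true value on average.

import random

# Illustration only: estimates = true value + noise; rank options by estimate
# and look at the gap between estimate and truth for the runner-up option.
def mean_runner_up_gap(n_options=10, noise_sd=1.0, n_trials=20000):
    total_gap = 0.0
    for _ in range(n_trials):
        true_values = [random.gauss(0.0, 1.0) for _ in range(n_options)]
        estimates = [v + random.gauss(0.0, noise_sd) for v in true_values]
        ranked = sorted(range(n_options), key=lambda i: estimates[i], reverse=True)
        almost_chosen = ranked[1]  # the forgone option that was nearly picked
        total_gap += estimates[almost_chosen] - true_values[almost_chosen]
    return total_gap / n_trials

random.seed(0)
print(round(mean_runner_up_gap(), 3))  # positive: the forgone path looks better than it was

Revealing the actual forgone outcome removes this inflated comparison standard, which is consistent with the finding that regret was lower when the forgone outcome was observed.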

Statement of Relevance

Reflecting on our past decisions can often make us feel regret. Previous research suggests that feelings of regret stem from comparing the outcome of our chosen path with that of the unchosen path.  We present a seemingly contradictory finding: Participants in our studies were more likely to experience regret when they did not observe the forgone outcome than when they saw it. This effect arises because when there are many paths to choose from, and uncertainty exists about how good each would be, people tend to overestimate the almost-chosen path. An idealized view of the path not taken then becomes an unfair standard of comparison for the chosen path, which inflates feelings of regret. Excessive regret has been found to be associated with depression and anxiety, and our work suggests that there may be a hidden source of undue regret—overestimation of forgone paths—that may contribute to these problems.

The ending...

Finally, is overestimating the paths we do not take causing us too much regret? Although regret can have benefits for experiential learning, it is an inherently negative emotion and has been found to be associated with depression and excessive anxiety (Kocovski et al., 2005; Markman & Miller, 2006; Roese et al., 2009). Because the regret in our studies was driven by biased beliefs, it may be excessive—after all, better-calibrated beliefs about forgone alternatives would cause less regret. Whether calibrating beliefs about forgone alternatives could also help in alleviating regret’s harmful psychological consequences is an important question for future research.


Important implications for psychotherapy....

Monday, November 29, 2021

People use mental shortcuts to make difficult decisions – even highly trained doctors delivering babies

Manasvini Singh
The Conversation
Originally published 14 OCT 21

Here is an excerpt:

Useful time-saver or dangerous bias?

A bias arising from a heuristic implies a deviation from an “optimal” decision. However, identifying the optimal decision in real life is difficult because you usually don’t know what could have been: the counterfactual. This is especially relevant in medicine.

Take the win-stay/lose-shift strategy, for example. Other studies show that after “bad” events, physicians switch strategies. Missing an important diagnosis makes physicians test more on subsequent patients. Experiencing complications with a drug makes the physician less likely to prescribe it again.
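
As a plain illustration of the win-stay/lose-shift rule just described, the sketch below repeats the previous choice after a good outcome and switches after a bad one. The delivery-mode labels and the 20% complication rate are assumptions for demonstration, not data from the study.

import random

# Toy sketch of win-stay/lose-shift applied to a binary delivery-mode choice.
def choose_next_mode(previous_mode, had_complication):
    if not had_complication:
        return previous_mode                                        # win-stay
    return "cesarean" if previous_mode == "vaginal" else "vaginal"  # lose-shift

random.seed(1)
mode = "vaginal"
for delivery in range(5):
    complication = random.random() < 0.2   # assumed 20% complication rate
    next_mode = choose_next_mode(mode, complication)
    status = "complication" if complication else "ok"
    print(f"delivery {delivery}: {mode} -> {status} -> next: {next_mode}")
    mode = next_mode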

But from a learning perspective, it’s difficult to say that ordering a test after missing a diagnosis is a flawed heuristic. Ordering a test always increases the chance that the physician catches an important diagnosis. So it’s a useful heuristic in some instances – if, for example, the physician had been underordering tests before, or the patient or insurer prefers shelling out the extra money for the chance to detect a cancer early.

In my study, though, switching delivery modes after complications offers no documented guarantees of avoiding future complications. And there is the added consideration of the short- and long-term health consequences of delivery-mode choice for mother and baby. Further, people are generally less tolerant of having inappropriate medical procedures performed on them than they are of being the recipients of unnecessary tests.

Tweaking the heuristic

Can physicians’ reliance on heuristics be lessened? Possibly.

Decision support systems that assist physicians with important clinical decisions are gathering momentum in medicine, and could help doctors course-correct after emotional events such as delivery complications.

For example, such algorithms can be built into electronic health records and perform a variety of tasks: flag physician decisions that appear nonstandard, identify patients who could benefit from a particular decision, summarize clinical information in ways that make it easier for physicians to digest and so on. As long as physicians retain at least some autonomy, decision support systems can do just that – support doctors in making clinical decisions.

Physicians can also be nudged unobtrusively toward certain decisions by tinkering with the way options are presented – what’s called “choice architecture.” Such nudges already work for other clinical decisions.

Sunday, April 4, 2021

4 widespread cognitive biases and how doctors can overcome them

Timothy M. Smith
American Medical Association
Originally posted 4 Feb 21

Here is an excerpt:

Four to look out for

Cognitive biases are worrisome for physicians because they can affect one’s ability to gather evidence, interpret evidence, take action and evaluate decisions, the authors noted. Here are four biases that commonly surface in medicine.

Confirmation bias involves selectively gathering and interpreting evidence to conform with one’s beliefs, as well as neglecting evidence that contradicts them. An example is refusing to consider alternative diagnoses once an initial diagnosis has been established, even though data, such as laboratory results, might contradict it.

“This bias leads physicians to see what they want to see,” the authors wrote. “Since it occurs early in the treatment pathway, confirmation bias can lead to mistaken diagnoses being passed on to and accepted by other clinicians without their validity being questioned, a process referred to as diagnostic momentum."

Anchoring bias is much like confirmation bias and refers to the practice of prioritizing information and data that support one’s initial impressions of evidence, even when those impressions are incorrect. Imagine attributing a patient’s back pain to known osteoporosis without ruling out other potential causes.

Affect heuristic describes when a physician’s actions are swayed by emotional reactions instead of rational deliberation about risks and benefits. It is context- or patient-specific and can manifest when a physician experiences positive or negative feelings toward a patient based on prior experiences.

Outcomes bias refers to the practice of believing that clinical results—good or bad—are always attributable to prior decisions, even if the physician has no valid reason to think this, preventing him from assimilating feedback to improve his performance.

“Although the relation between decisions and outcomes might seem intuitive, the outcome of a decision cannot be the sole determinant of its quality; that is, sometimes a good outcome can happen despite a poor clinical decision, and vice versa,” the authors wrote.

Thursday, October 29, 2020

Probabilistic Biases Meet the Bayesian Brain.

Chater N, et al.
Current Directions in Psychological Science. 
2020;29(5):506-512. 
doi:10.1177/0963721420954801

Abstract

In Bayesian cognitive science, the mind is seen as a spectacular probabilistic-inference machine. But judgment and decision-making (JDM) researchers have spent half a century uncovering how dramatically and systematically people depart from rational norms. In this article, we outline recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities but approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, which offers the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.
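
One way to picture the sampling hypothesis is the toy sketch below (my own illustration, not the authors' model): approximate an inference by running a short sampling chain started at an arbitrary anchor. With only a handful of samples the estimate stays near the anchor, which looks like anchoring and insufficient adjustment; with many samples it converges on the correct answer. The target distribution, step size, and sample counts are assumptions.

import math
import random

# Toy Metropolis-style chain targeting a standard normal, started at an anchor.
# Few samples -> estimate biased toward the anchor; many samples -> near truth.
def sample_based_estimate(anchor, n_samples, step_sd=0.5):
    def log_density(x):
        return -0.5 * x * x        # log of an (unnormalized) standard normal
    x, total = anchor, 0.0
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step_sd)
        accept_prob = math.exp(min(0.0, log_density(proposal) - log_density(x)))
        if random.random() < accept_prob:
            x = proposal            # accept the move
        total += x
    return total / n_samples

random.seed(2)
print(round(sample_based_estimate(anchor=10.0, n_samples=5), 2))     # stays near the anchor
print(round(sample_based_estimate(anchor=10.0, n_samples=5000), 2))  # close to the true mean, 0

The bias here is not a faulty rule but an under-resourced approximation, which is the sense in which sampling models promise to reconcile the Bayesian brain with classic heuristics and biases.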

Introduction

Human probabilistic reasoning gets bad press. Decades of brilliant experiments, most notably by Daniel Kahneman and Amos Tversky (e.g., Kahneman, 2011; Kahneman, Slovic, & Tversky, 1982), have shown a plethora of ways in which people get into a terrible muddle when wondering how probable things are. Every psychologist has learned about anchoring, conservatism, the representativeness heuristic, and many other ways that people reveal their probabilistic incompetence. Creating probability theory in the first place was incredibly challenging, exercising great mathematical minds over several centuries (Hacking, 1990). Probabilistic reasoning is hard, and perhaps it should not be surprising that people often do it badly. This view is the starting point for the whole field of judgment and decision-making (JDM) and its cousin, behavioral economics.

Oddly, though, human probabilistic reasoning equally often gets good press. Indeed, many psychologists, neuroscientists, and artificial-intelligence researchers believe that probabilistic reasoning is, in fact, the secret of human intelligence.

Tuesday, May 19, 2020

Uncovering the moral heuristics of altruism: A philosophical scale

Friedland J, Emich K, Cole BM (2020)
PLoS ONE 15(3): e0229124.
https://doi.org/10.1371/journal.pone.0229124

Abstract

Extant research suggests that individuals employ traditional moral heuristics to support their observed altruistic behavior; yet findings have largely been limited to inductive extrapolation and rely on relatively few traditional frames in so doing, namely, deontology in organizational behavior and virtue theory in law and economics. Given that these and competing moral frames such as utilitarianism can manifest as identical behavior, we develop a moral framing instrument—the Philosophical Moral-Framing Measure (PMFM)—to expand and distinguish traditional frames associated and disassociated with observed altruistic behavior. The validation of our instrument based on 1015 subjects in 3 separate real stakes scenarios indicates that heuristic forms of deontology, virtue-theory, and utilitarianism are strongly related to such behavior, and that egoism is an inhibitor. It also suggests that deontic and virtue-theoretical frames may be commonly perceived as intertwined and opens the door for new research on self-abnegation, namely, a perceived moral obligation toward suffering and self-denial. These findings hold the potential to inform ongoing conversations regarding organizational citizenship and moral crowding out, namely, how financial incentives can undermine altruistic behavior.

The research is here.

Monday, October 28, 2019

Dimensions of decision-making: An evidence-based classification of heuristics and biases

A. Ceschi and others
Personality and Individual Differences, 
Volume 146, 1 August 2019, Pages 188-200

Abstract

Traditionally, studies examining decision-making heuristics and biases (H&B) have focused on aggregate effects using between-subjects designs in order to demonstrate violations of rationality. Although H&B are often studied in isolation from one another, emerging research has suggested that stable and reliable individual differences in rational thought exist and that performance across tasks is related, which may suggest an underlying phenotypic structure of decision-making skills. Though numerous theoretical and empirical classifications have been offered, results have been mixed. The current study aimed to clarify this research question. Participants (N = 289) completed a battery of 17 H&B tasks, assessed with a within-subjects design, that we selected based on a review of prior empirical and theoretical taxonomies. Exploratory and confirmatory analyses yielded a solution suggesting that these biases conform to a model composed of three dimensions: Mindware gaps, Valuation biases (i.e., Positive Illusions and Negativity effect), and Anchoring and Adjustment. We discuss these findings in relation to proposed taxonomies and existing studies on individual differences in decision-making.
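
The analytic idea can be sketched schematically (this is not the authors' analysis code; the synthetic data, factor count, and loadings below are assumptions): scores on a battery of tasks are generated by a few latent dimensions, which a factor analysis can then recover from the task scores.

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate 289 participants x 17 tasks driven by 3 latent decision-making
# dimensions, then fit a 3-factor model to the task scores. Illustration only.
rng = np.random.default_rng(0)
n_participants, n_tasks, n_factors = 289, 17, 3
latent = rng.normal(size=(n_participants, n_factors))
loadings = rng.normal(scale=0.8, size=(n_factors, n_tasks))
scores = latent @ loadings + rng.normal(scale=0.5, size=(n_participants, n_tasks))

fa = FactorAnalysis(n_components=n_factors).fit(scores)
print(fa.components_.shape)  # (3, 17): estimated loading of each task on each factor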

A pdf of the research can be downloaded here.

Monday, November 12, 2018

Optimality bias in moral judgment

Julian De Freitas and Samuel G. B. Johnson
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 149-163

Abstract

We often make decisions with incomplete knowledge of their consequences. Might people nonetheless expect others to make optimal choices, despite this ignorance? Here, we show that people are sensitive to moral optimality: that people hold moral agents accountable depending on whether they make optimal choices, even when there is no way that the agent could know which choice was optimal. This result held up whether the outcome was positive, negative, inevitable, or unknown, and across within-subjects and between-subjects designs. Participants consistently distinguished between optimal and suboptimal choices, but not between suboptimal choices of varying quality — a signature pattern of the Efficiency Principle found in other areas of cognition. A mediation analysis revealed that the optimality effect occurs because people find suboptimal choices more difficult to explain and assign harsher blame accordingly, while moderation analyses found that the effect does not depend on tacit inferences about the agent's knowledge or negligence. We argue that this moral optimality bias operates largely out of awareness, reflects broader tendencies in how humans understand one another's behavior, and has real-world implications.

The research is here.

Monday, September 10, 2018

Cognitive Biases Tricking Your Brain

Ben Yagoda
The Atlantic
September 2018 Issue

Here is an excerpt:

Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves. Instead, it has been devoted to changing behavior, in the form of incentives or “nudges.” For example, while present bias has so far proved intractable, employers have been able to nudge employees into contributing to retirement plans by making saving the default option; you have to actively take steps in order to not participate. That is, laziness or inertia can be more powerful than bias. Procedures can also be organized in a way that dissuades or prevents people from acting on biased thoughts. A well-known example: the checklists for doctors and nurses put forward by Atul Gawande in his book The Checklist Manifesto.

Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative. These experiments are based on the reactions and responses of randomly chosen subjects, many of them college undergraduates: people, that is, who care about the $20 they are being paid to participate, not about modifying or even learning about their behavior and thinking. But what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?

The info is here.

Wednesday, July 25, 2018

Heuristics and Public Policy: Decision Making Under Bounded Rationality

Sanjit Dhami, Ali al-Nowaihi, and Cass Sunstein
SSRN.com
Posted June 20, 2018

Abstract

How do human beings make decisions when, as the evidence indicates, the assumptions of the Bayesian rationality approach in economics do not hold? Do human beings optimize, or can they? Several decades of research have shown that people possess a toolkit of heuristics to make decisions under certainty, risk, subjective uncertainty, and true uncertainty (or Knightian uncertainty). We outline recent advances in knowledge about the use of heuristics and departures from Bayesian rationality, with particular emphasis on growing formalization of those departures, which add necessary precision. We also explore the relationship between bounded rationality and libertarian paternalism, or nudges, and show that some recent objections, founded on psychological work on the usefulness of certain heuristics, are based on serious misunderstandings.

The article can be downloaded here.

Friday, June 15, 2018

The danger of absolute thinking is absolutely clear

Mohammed Al-Mosaiwi
aeon.co
Originally posted May 2, 2018

Here is an excerpt:

There are generally two forms of absolutism: ‘dichotomous thinking’ and ‘categorical imperatives’. Dichotomous thinking – also referred to as ‘black-and-white’ or ‘all-or-nothing’ thinking – describes a binary outlook, where things in life are either ‘this’ or ‘that’, and nothing in between. Categorical imperatives are completely rigid demands that people place on themselves and others. The term is borrowed from Immanuel Kant’s deontological moral philosophy, which is grounded in an obligation- and rules-based ethical code.

In our research – and in clinical psychology more broadly – absolutist thinking is viewed as an unhealthy thinking style that disrupts emotion-regulation and hinders people from achieving their goals. Yet we all, to varying extents, are disposed to it – why is this? Primarily, because it’s much easier than dealing with the true complexities of life. The term cognitive miser, first introduced by the American psychologists Susan Fiske and Shelley Taylor in 1984, describes how humans seek the simplest and least effortful ways of thinking. Nuance and complexity are expensive – they take up precious time and energy – so wherever possible we try to cut corners. This is why we have biases and prejudices, and form habits. It’s why the study of heuristics (intuitive ‘gut-feeling’ judgments) is so useful in behavioural economics and political science.

But there is no such thing as a free lunch; the time and energy saved through absolutist thinking has a cost. In order to successfully navigate through life, we need to appreciate nuance, understand complexity and embrace flexibility. When we succumb to absolutist thinking for the most important matters in our lives – such as our goals, relationships and self-esteem – the consequences are disastrous.

The article is here.

Wednesday, April 25, 2018

The Peter Principle: Promotions and Declining Productivity

Edward P. Lazear
Hoover Institution and Graduate School of Business
Revision 10/12/00

Abstract

Many have observed that individuals perform worse after having received a promotion. The most famous statement of the idea is the Peter Principle, which states that people are promoted to their level of incompetence. There are a number of possible explanations. Two are explored. The most traditional is that the prospect of promotion provides incentives which vanish after the promotion has been granted; thus, tenured faculty slack off. Another is that output as a statistical matter is expected to fall. Being promoted is evidence that a standard has been met. Regression to the mean implies that future productivity will decline on average. Firms optimally account for the regression bias in making promotion decisions, but the effect is never eliminated. Both explanations are analyzed. The statistical point always holds; the slacking off story holds only under certain compensation structures.
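
The statistical point is easy to reproduce in a short simulation (an illustration under assumed normal ability and noise, not Lazear's model): when promotion requires observed output to clear a standard, the output that earned the promotion contains favorable noise, so output in the next period falls back toward true ability on average.

import random

# Illustration: output = ability + noise; promote if output clears a standard;
# compare output before vs. after promotion for the promoted workers.
def mean_output_change_after_promotion(standard=1.0, n_workers=200000):
    changes = []
    for _ in range(n_workers):
        ability = random.gauss(0.0, 1.0)
        before = ability + random.gauss(0.0, 1.0)      # the output that won promotion
        if before >= standard:
            after = ability + random.gauss(0.0, 1.0)   # next period's output
            changes.append(after - before)
    return sum(changes) / len(changes)

random.seed(0)
print(round(mean_output_change_after_promotion(), 3))  # negative: output falls on average

Note that nothing in the simulation involves slacking off; the decline is purely the regression effect the abstract describes.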

The paper is here.

Monday, March 26, 2018

Non cogito, ergo sum

Ian Leslie
The Economist
Originally published May/June 2012

Here is an excerpt:

Researchers from Columbia Business School, New York, conducted an experiment in which people were asked to predict outcomes across a range of fields, from politics to the weather to the winner of “American Idol”. They found that those who placed high trust in their feelings made better predictions than those who didn’t. The result only applied, however, when the participants had some prior knowledge.

This last point is vital. Unthinking is not the same as ignorance; you can’t unthink if you haven’t already thought. Djokovic was able to pull off his wonder shot because he had played a thousand variations on it in previous matches and practice; Dylan’s lyrical outpourings drew on his immersion in folk songs, French poetry and American legends. The unconscious minds of great artists and sportsmen are like dense rainforests, which send up spores of inspiration.

The higher the stakes, the more overthinking is a problem. Ed Smith, a cricketer and author of “Luck”, uses the analogy of walking along a kerbstone: easy enough, but if there were a hundred-foot drop to the street, every step would be a trial. In high-performance fields it’s the older and more successful performers who are most prone to choke, because expectation is piled upon them. An opera singer launching into an aria at La Scala cannot afford to think how her technique might be improved. When Federer plays a match point these days, he may feel as if he’s standing on the cliff edge of his reputation.

The article is here.

Thursday, November 16, 2017

Moral Hard-Wiring and Moral Enhancement

Introduction

In a series of papers (Persson & Savulescu 2008; 2010; 2011a; 2012a; 2013; 2014a) and book (Persson & Savulescu 2012b), we have argued that there is an urgent need to pursue research into the possibility of moral enhancement by biomedical means – e.g. by pharmaceuticals, non-invasive brain stimulation, genetic modification or other means directly modifying biology. The present time brings existential threats which human moral psychology, with its cognitive and moral limitations and biases, is unfit to address. Exponentially increasing, widely accessible technological advance and rapid globalisation create threats of intentional misuse (e.g. biological or nuclear terrorism) and global collective action problems, such as the economic inequality between developed and developing countries and anthropogenic climate change, which human psychology is not set up to address. We have hypothesized that these limitations are the result of the evolutionary function of morality being to maximize the fitness of small cooperative groups competing for resources. Because these limitations of human moral psychology pose significant obstacles to coping with the current moral mega-problems, we argued that biomedical modification of human moral psychology may be necessary. We have not argued that biomedical moral enhancement would be a single “magic bullet” but rather that it could play a role in a comprehensive approach which also features cultural and social measures.

The paper is here.

Monday, October 2, 2017

The Role of a “Common Is Moral” Heuristic in the Stability and Change of Moral Norms

Lindström, B., Jangard, S., Selbing, I., & Olsson, A. (2017).
Journal of Experimental Psychology: General.

Abstract

Moral norms are fundamental for virtually all social interactions, including cooperation. Moral norms develop and change, but the mechanisms underlying when, and how, such changes occur are not well-described by theories of moral psychology. We tested, and confirmed, the hypothesis that the commonness of an observed behavior consistently influences its moral status, which we refer to as the common is moral (CIM) heuristic. In 9 experiments, we used an experimental model of dynamic social interaction that manipulated the commonness of altruistic and selfish behaviors to examine the change of people’s moral judgments. We found that both altruistic and selfish behaviors were judged as more moral, and less deserving of punishment, when common than when rare, which could be explained by a classical formal model (social impact theory) of behavioral conformity. Furthermore, judgments of common versus rare behaviors were faster, indicating that they were computationally more efficient. Finally, we used agent-based computer simulations to investigate the endogenous population dynamics predicted to emerge if individuals use the CIM heuristic, and found that the CIM heuristic is sufficient for producing 2 hallmarks of real moral norms: stability and sudden changes. Our results demonstrate that commonness shapes our moral psychology through mechanisms similar to behavioral conformity with wide implications for understanding the stability and change of moral norms.
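
A bare-bones agent-based sketch of the CIM dynamic (my illustration; the conformity rule, shock, and parameters are assumptions, not the authors' model): the judged morality of a behavior simply tracks how common it is, behavior drifts toward the majority, and a large enough shock produces a sudden flip in the norm.

import random

# Illustration only: judged morality of the 'selfish' behavior equals its
# current commonness; agents occasionally conform to the majority; a one-off
# shock at round 30 flips a large minority and tips the norm.
def simulate_cim(n_agents=100, n_rounds=60, conform_p=0.1, shock_round=30):
    behaviors = [0] * (n_agents // 2) + [1] * (n_agents // 2)  # 1 = selfish
    for t in range(n_rounds):
        common = sum(behaviors) / n_agents      # how common selfish behavior is
        judged_morality = common                # CIM heuristic: common -> moral
        if t % 10 == 0:
            print(f"round {t:2d}: selfish share = {common:.2f}, judged morality = {judged_morality:.2f}")
        if t == shock_round:
            for i in range(3 * n_agents // 5):  # exogenous shock to 60% of agents
                behaviors[i] = 1
        majority = 1 if common > 0.5 else 0
        for i in range(n_agents):
            if random.random() < conform_p:     # occasional conformity to the majority
                behaviors[i] = majority

random.seed(3)
simulate_cim()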

The article is here.

Monday, August 28, 2017

Maintaining cooperation in complex social dilemmas using deep reinforcement learning

Adam Lerer and Alexander Peysakhovich
(2017)

Abstract

In social dilemmas individuals face a temptation to increase their payoffs in the short run at a cost to the long run total welfare. Much is known about how cooperation can be stabilized in the simplest of such settings: repeated Prisoner’s Dilemma games. However, there is relatively little work on generalizing these insights to more complex situations. We start to fill this gap by showing how to use modern reinforcement learning methods to generalize a highly successful Prisoner’s Dilemma strategy: tit-for-tat. We construct artificial agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (following a bad turn try to return to mutual cooperation). We show both theoretically and experimentally that generalized tit-for-tat agents can maintain cooperation in more complex environments. In contrast, we show that employing purely reactive training techniques can lead to agents whose behavior results in socially inefficient outcomes.
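
For reference, here is the classic strategy the paper generalizes, in a plain repeated Prisoner's Dilemma (a sketch with a standard payoff matrix; the specific values are illustrative, and nothing here involves the paper's reinforcement-learning machinery): tit-for-tat is nice, provokable, and forgiving.

# Standard Prisoner's Dilemma payoffs: (row player's payoff, column player's payoff)
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    if not their_history:
        return "C"                 # nice: begin by cooperating
    return their_history[-1]       # provokable and forgiving: mirror the last move

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (30, 30): cooperation is sustained
print(play(tit_for_tat, always_defect))   # (9, 14): exploitation limited to one round

Generalizing this behaviour to richer environments, where cooperating and defecting are not primitive one-shot actions, is the part that requires the reinforcement-learning methods described in the paper.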

The paper is here.

Thursday, June 8, 2017

The AI Cargo Cult: The Myth of Superhuman AI

Kevin Kelly
Backchannel.com
Originally posted April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The article is here.

Wednesday, April 5, 2017

Root Out Bias from Your Decision-Making Process

Thomas C. Redman
Harvard Business Review
Originally posted March 10, 2017

Here is an excerpt:

Making good decisions involves hard work. Important decisions are made in the face of great uncertainty, and often under time pressure. The world is a complex place: People and organizations respond to any decision, working together or against one another, in ways that defy comprehension. There are too many factors to consider. There is rarely an abundance of relevant, trusted data that bears directly on the matter at hand. Quite the contrary — there are plenty of partially relevant facts from disparate sources — some of which can be trusted, some not — pointing in different directions.

With this backdrop, it is easy to see how one can fall into the trap of making the decision first and then finding the data to back it up later. It is so much faster. But faster is not the same as well-thought-out. Before you jump to a decision, you should ask yourself, “Should someone else who has time to assemble a complete picture make this decision?” If so, you should assign the decision to that person or team.

The article is here.