Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, September 13, 2023

Rational simplification and rigidity in human planning

Ho, M. K., Cohen, J. D., & Griffiths, T.
(2023, March 30). PsyArXiv


Planning underpins the impressive flexibility of goal-directed behavior. However, even when planning, people can display surprising rigidity in how they think about problems (e.g., “functional fixedness”) that lead them astray. How can our capacity for behavioral flexibility be reconciled with our susceptibility to conceptual inflexibility? We propose that these tendencies reflect avoidance of two cognitive costs: the cost of representing task details and the cost of switching between families of representations. To test this hypothesis, we developed a novel paradigm that affords participants opportunities to choose different families of simplified representations to plan. In two pre-registered online studies (N = 377; N = 294), we found that participants’ optimal behavior, suboptimal behavior, and reaction time are explained by a computational model that formalizes people’s avoidance of representational complexity and switching. These results demonstrate how the selection of simplified, rigid representations leads to the otherwise puzzling combination of flexibility and inflexibility observed in problem solving.

General Discussion

Here, we evaluated the hypothesis that functional fixedness reflects the avoidance of complexity and switching costs during planning. To do so, we developed a novel paradigm in which participants navigated mazes that could be represented simply as blocks or more complexly as blocks and notches. Experiments revealed that people simplify problems (for instance, by adopting a blocks-only construal strategy if navigating through notches was unnecessary) and that they persist in these strategies (for instance, continuing to ignore notches even when attending to a notch would lead to a better solution). Additionally, our computational analyses using the value-guided construal framework (Ho et al., 2022) confirmed that the avoidance of complexity and switching costs explains observed patterns of optimal behavior, suboptimal behavior, and reaction times under different experimental manipulations. Overall, these results support our proposal and help clarify the computational principles that underlie functional fixedness.
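The trade-off the authors describe can be illustrated with a small sketch. This is not their actual model; the construal names, plan costs, and weights below are invented for the example:

```python
# Hypothetical sketch: an agent picks a simplified representation ("construal")
# by weighing how good the resulting plan is against the cost of representing
# task detail and the cost of switching between families of representations.

def choose_construal(construals, previous=None,
                     complexity_weight=1.0, switch_cost=2.0):
    """Return the construal with the lowest total cost.

    Each construal is a dict with:
      'name'       -- identifier
      'plan_cost'  -- steps the resulting plan takes (worse plans cost more)
      'complexity' -- how many task details it represents
    """
    def total_cost(c):
        cost = c["plan_cost"] + complexity_weight * c["complexity"]
        if previous is not None and c["name"] != previous:
            cost += switch_cost  # penalty for switching representations
        return cost

    return min(construals, key=total_cost)

construals = [
    {"name": "blocks-only",        "plan_cost": 12, "complexity": 1},
    {"name": "blocks-and-notches", "plan_cost": 8,  "complexity": 4},
]

# Starting fresh, the detailed construal's shorter plan wins (8+4=12 vs 12+1=13)...
print(choose_construal(construals)["name"])                          # blocks-and-notches
# ...but an agent already committed to blocks-only stays rigid, because
# switching adds its own cost (13 vs 12+2=14).
print(choose_construal(construals, previous="blocks-only")["name"])  # blocks-only
```

The second call reproduces the paper's central pattern: a representation that was once rationally simple is retained even when a more detailed one would now yield a better plan.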


The authors argue that people often simplify problems in order to make them more manageable, but this can lead to rigidity and suboptimal solutions.

The authors conclude that rational simplification is a common cognitive mechanism that can lead to both flexibility and rigidity in planning. They argue that the model provides a useful framework for understanding how people simplify problems and make decisions.

Here are some of the key takeaways from the article:
  • People often simplify problems in order to make them more manageable.
  • This can lead to rigidity and suboptimal solutions.
  • The tendency to simplify problems is a cognitive mechanism that can be explained by the avoidance of two costs: representing task details and switching between families of representations.
  • The model provides a useful framework for understanding how people simplify problems and make decisions.

Saturday, August 19, 2023

Reverse-engineering the self

Paul, L., Ullman, T. D., De Freitas, J., & Tenenbaum, J.
(2023, July 8). PsyArXiv


To think for yourself, you need to be able to solve new and unexpected problems. This requires you to identify the space of possible environments you could be in, locate yourself in the relevant one, and frame the new problem as it exists relative to your location in this new environment. Combining thought experiments with a series of self-orientation games, we explore the way that intelligent human agents perform this computational feat by “centering” themselves: orienting themselves perceptually and cognitively in an environment, while simultaneously holding a representation of themselves as an agent in that environment. When faced with an unexpected problem, human agents can shift their perceptual and cognitive center from one location in a space to another, or “re-center”, in order to reframe a problem, giving them a distinctive type of cognitive flexibility. We define the computational ability to center (and re-center) as “having a self,” and propose that implementing this type of computational ability in machines could be an important step towards building a truly intelligent artificial agent that could “think for itself”. We then develop a conceptually robust, empirically viable, engineering-friendly implementation of our proposal, drawing on well established frameworks in cognition, philosophy, and computer science for thinking, planning, and agency.

The authors argue that the computational structure of the self is a key component of human intelligence, and they propose a framework for reverse-engineering it, drawing on work in cognition, philosophy, and computer science.

The authors argue that the self is a computational agent that is able to learn and think for itself. This agent has a number of key abilities, including:
  • The ability to represent the world and its own actions.
  • The ability to plan and make decisions.
  • The ability to learn from experience.
  • The ability to have a sense of self.
The authors argue that these abilities can be modeled as a partially observable Markov decision process (POMDP), a mathematical framework for sequential decision-making in which the decision-maker lacks complete information about the state of the environment. They propose a number of methods for reverse-engineering the self, including:
  • Using data from brain imaging studies to identify the neural correlates of self-related processes.
  • Using computational models of human decision-making to test hypotheses about the computational structure of the self.
  • Using philosophical analysis to clarify the nature of self-related concepts.
The authors argue that reverse-engineering the self is a promising approach to understanding human intelligence and developing artificial intelligence systems that are capable of thinking for themselves.
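As a rough illustration of the POMDP formalism invoked above, here is a minimal belief update for a two-state toy problem. The states, observation noise, and numbers are textbook-style inventions, not from the paper:

```python
# In a POMDP the agent never observes the hidden state directly, so it
# maintains a belief (a probability distribution over states) and updates
# that belief from noisy observations via Bayes' rule.

states = ["left", "right"]          # hidden: which door the reward is behind
belief = {"left": 0.5, "right": 0.5}

def obs_prob(observation, state):
    """P(observation | state): listening is right 85% of the time."""
    return 0.85 if observation == state else 0.15

def update_belief(belief, observation):
    """Bayes' rule: b'(s) is proportional to P(o | s) * b(s)."""
    unnorm = {s: obs_prob(observation, s) * belief[s] for s in belief}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

belief = update_belief(belief, "left")
print(round(belief["left"], 2))   # 0.85
belief = update_belief(belief, "left")
print(round(belief["left"], 2))   # 0.97
```

Acting to reduce uncertainty in such a belief state, rather than in the world directly, is what makes POMDP planning a natural model of an agent that must locate itself in an environment before solving problems in it.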

Friday, December 2, 2022

Rational use of cognitive resources in human planning

Callaway, F., van Opheusden, B., Gul, S. et al. 
Nat Hum Behav 6, 1112–1125 (2022).


Making good decisions requires thinking ahead, but the huge number of actions and outcomes one could consider makes exhaustive planning infeasible for computationally constrained agents, such as humans. How people are nevertheless able to solve novel problems when their actions have long-reaching consequences is thus a long-standing question in cognitive science. To address this question, we propose a model of resource-constrained planning that allows us to derive optimal planning strategies. We find that previously proposed heuristics such as best-first search are near optimal under some circumstances but not others. In a mouse-tracking paradigm, we show that people adapt their planning strategies accordingly, planning in a manner that is broadly consistent with the optimal model but not with any single heuristic model. We also find systematic deviations from the optimal model that might result from additional cognitive constraints that are yet to be uncovered.


In this paper, we proposed a rational model of resource-constrained planning and compared the predictions of the model to human behaviour in a process-tracing paradigm. Our results suggest that human planning strategies are highly adaptive in ways that previous models cannot capture. In Experiment 1, we found that the optimal planning strategy in a generic environment resembled best-first search with a relative stopping rule. Participant behaviour was also consistent with such a strategy. However, the optimal planning strategy depends on the structure of the environment. Thus, in Experiments 2 and 3, we constructed six environments in which the optimal strategy resembled different classical search algorithms (best-first, breadth-first, depth-first and backward search). In each case, participant behaviour matched the environment-appropriate algorithm, as the optimal model predicted.
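The flavor of best-first search with a relative stopping rule can be sketched as follows. The tree, node values, and margin are invented for the demo; the paper derives the optimal stopping rule from resource-rational principles rather than fixing it by hand:

```python
import heapq

def best_first_plan(tree, values, root, margin=3):
    """Best-first search with a relative stopping rule.

    tree:   dict node -> list of children ([] for leaves)
    values: dict node -> estimated value of the path ending at that node

    Repeatedly expands the most valuable frontier node; deliberation stops
    (the agent commits to the current best node) once that node is a leaf
    or leads the runner-up by at least `margin`.
    """
    frontier = [(-values[root], root)]   # max-heap via negated values
    expanded = []
    while frontier:
        neg_v, node = heapq.heappop(frontier)
        runner_up = -frontier[0][0] if frontier else float("-inf")
        # Relative stopping rule: once the lead is big enough, further
        # planning is not worth its cost -- commit to this branch.
        if expanded and (not tree[node] or -neg_v - runner_up >= margin):
            return expanded, node
        expanded.append(node)
        for child in tree[node]:
            heapq.heappush(frontier, (-values[child], child))
    return expanded, root

tree = {"start": ["A", "B"], "A": ["A1", "A2"], "B": ["B1"],
        "A1": [], "A2": [], "B1": []}
values = {"start": 0, "A": 5, "B": 1, "A1": 8, "A2": 4, "B1": 2}

print(best_first_plan(tree, values, "start"))             # (['start'], 'A')
print(best_first_plan(tree, values, "start", margin=10))  # (['start', 'A'], 'A1')
```

With a small margin the search commits after a single expansion, because branch A already clearly dominates B; raising the margin buys more deliberation before committing. That cost-sensitivity to "how far ahead is the leader" is what distinguishes this rule from plain exhaustive best-first search.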

The idea that people use heuristics that are jointly adapted to environmental structure and computational limitations is not new. First popularized by Herbert Simon, it has more recently been championed in ecological rationality, which generally takes the approach of identifying computationally frugal heuristics that make accurate choices in certain environments. However, while ecological rationality explicitly rejects the notion of optimality, our approach embraces it, identifying heuristics that maximize an objective function that includes both external utility and internal cognitive cost. Supporting our approach, we found that the optimal model explained human planning behaviour better than flexible combinations of previously proposed planning heuristics in seven of the eight environments we considered (Supplementary Table 1).

Tuesday, November 16, 2021

Decision Prioritization and Causal Reasoning in Decision Hierarchies

Zylberberg, A. (2021, September 6). 


From cooking a meal to finding a route to a destination, many real life decisions can be decomposed into a hierarchy of sub-decisions. In a hierarchy, choosing which decision to think about requires planning over a potentially vast space of possible decision sequences. To gain insight into how people decide what to decide on, we studied a novel task that combines perceptual decision making, active sensing and hierarchical and counterfactual reasoning. Human participants had to find a target hidden at the lowest level of a decision tree. They could solicit information from the different nodes of the decision tree to gather noisy evidence about the target's location. Feedback was given only after errors at the leaf nodes and provided ambiguous evidence about the cause of the error. Despite the complexity of the task (with 10^7 latent states), participants were able to plan efficiently in the task. A computational model of this process identified a small number of heuristics of low computational complexity that accounted for human behavior. These heuristics include making categorical decisions at the branching points of the decision tree rather than carrying forward entire probability distributions, discarding sensory evidence deemed unreliable to make a choice, and using choice confidence to infer the cause of the error after an initial plan failed. Plans based on probabilistic inference or myopic sampling norms could not capture participants' behavior. Our results show that it is possible to identify hallmarks of heuristic planning with sensing in human behavior and that the use of tasks of intermediate complexity helps identify the rules underlying the human ability to reason over decision hierarchies.


Adaptive behavior requires making accurate decisions, but also knowing what decisions are worth making. To study how people decide what to decide on, we investigated a novel task in which people had to find a target, hidden at the lowest level of a decision tree, by gathering stochastic information from the internal nodes of the decision tree. Our central finding is that a small number of heuristic rules explain participants' behavior in this complex decision-making task. The study extends the perceptual decision framework to more complex decisions that comprise a hierarchy of sub-decisions of varying levels of difficulty, and where the decision maker has to actively decide which decision to address at any given time.

Our task can be conceived as a sequence of binary decisions, or as one decision with eight alternatives.  Participants’ behavior supports the former interpretation.  Participants often performed multiple queries on the same node before descending levels, and they rarely made a transition from an internal node to a higher-level one before reaching a leaf node.  This indicates that participants made categorical decisions about the direction of motion at the visited nodes before they decided to descend levels. This bias toward resolving uncertainty locally was not observed in an approximately optimal policy (Fig. 8), and thus may reflect more general cognitive constraints that limit participants’ performance in our task (Markant et al., 2016). A strong candidate is the limited capacity of working memory (Miller, 1956). By reaching a categorical decision at each internal node, participants avoid the need to operate with full probability distributions over all task-relevant variables, favoring instead a strategy in which only the confidence about the motion choices is carried forward to inform future choices (Zylberberg et al., 2011).
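The "categorical commitment" heuristic described above can be sketched in a few lines. Instead of carrying a full probability distribution over all eight leaves down the tree, the agent resolves each binary node into a single left/right choice plus a scalar confidence, and multiplies confidences along the path. The posterior values below are invented for the demo:

```python
# Categorical descent: commit to a choice at each internal node and carry
# forward only the confidence in those choices, not the full distribution.

def categorical_descent(node_posteriors):
    """node_posteriors: P(correct branch is 'left') at each level, top to bottom.

    Returns the chosen path and the confidence carried forward -- a single
    number, rather than a posterior over every leaf of the tree.
    """
    path, confidence = [], 1.0
    for p_left in node_posteriors:
        choice = "left" if p_left >= 0.5 else "right"
        path.append(choice)
        confidence *= max(p_left, 1 - p_left)  # confidence in this local choice
    return path, confidence

path, conf = categorical_descent([0.9, 0.6, 0.8])
print(path)            # ['left', 'left', 'left']
print(round(conf, 3))  # 0.432
```

Consistent with the paper's third heuristic, a carried-forward confidence like this is exactly what an agent could consult after an error at a leaf: the least confident choice on the path is the most plausible cause of the failure and the natural node to revisit.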

Saturday, November 6, 2021

Generating Options and Choosing Between Them Depend on Distinct Forms of Value Representation

Morris, A., Phillips, J., Huang, K., & 
Cushman, F. (2021). 
Psychological Science. 


Humans have a remarkable capacity for flexible decision-making, deliberating among actions by modeling their likely outcomes. This capacity allows us to adapt to the specific features of diverse circumstances. In real-world decision-making, however, people face an important challenge: There are often an enormous number of possibilities to choose among, far too many for exhaustive consideration. There is a crucial, understudied prechoice step in which, among myriad possibilities, a few good candidates come quickly to mind. How do people accomplish this? We show across nine experiments (N = 3,972 U.S. residents) that people use computationally frugal cached value estimates to propose a few candidate actions on the basis of their success in past contexts (even when irrelevant for the current context). Deliberative planning is then deployed just within this set, allowing people to compute more accurate values on the basis of context-specific criteria. This hybrid architecture illuminates how typically valuable thoughts come quickly to mind during decision-making.
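The two-stage architecture in the abstract can be sketched as follows. The options, value numbers, and consideration-set size are illustrative, not the authors' stimuli:

```python
# Hybrid option generation: cheap cached values (learned from past contexts)
# nominate a few candidates, and costly context-specific evaluation runs
# only within that small consideration set.

def decide(options, cached_value, context_value, k=3):
    """Stage 1: the top-k options by cached value come to mind.
       Stage 2: deliberate only among those, using context-specific value."""
    considered = sorted(options, key=cached_value, reverse=True)[:k]
    return max(considered, key=context_value)

# What to eat: cached values reflect general past success...
cached = {"pizza": 0.9, "sushi": 0.8, "salad": 0.7, "soup": 0.4, "oysters": 0.2}
# ...but tonight's context (say, a light dinner) ranks options differently.
context = {"pizza": 0.3, "sushi": 0.6, "salad": 0.8, "soup": 0.9, "oysters": 0.1}

print(decide(list(cached), cached.get, context.get))       # salad
# Exhaustive consideration (k=5) would have found the contextually best
# option, soup -- but soup's low cached value keeps it from coming to mind.
print(decide(list(cached), cached.get, context.get, k=5))  # soup
```

The gap between the two calls captures the paper's key signature: choices are shaped both by what deliberation prefers and by which options cached value allows into consideration in the first place.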

From the General Discussion

Salience effects, such as recency, frequency of consideration, and extremity, likely also contribute to consideration (Kahneman, 2003; Tversky & Kahneman, 1973). Our results supported at least one salience effect: In Studies 4 through 6, in addition to our primary effect of high cached value, options with more extreme cached values relative to the mean also tended to come to mind (see the checkmark shape in Fig. 3d). Salience effects such as this may have a functional basis, such as conserving scarce cognitive resources (Lieder et al., 2018). An ideal general theory would specify how these diverse factors—including many others, such as personality traits, social roles, and cultural norms (Smaldino & Richerson, 2012)—form a coherent, adaptive design for option generation.

A growing body of work suggests that value influences what comes to mind not only during decision-making but also in many other contexts, such as causal reasoning, moral judgment, and memory recall (Bear & Knobe, 2017; Braun et al., 2018; Hitchcock & Knobe, 2009; Mattar & Daw, 2018; Phillips et al., 2019). A key inquiry going forward will be the role of cached versus context-specific value estimation in these cases.

Wednesday, May 9, 2018

How To Deliver Moral Leadership To Employees

John Baldoni
Originally posted April 12, 2018

Here is an excerpt:

When it comes to moral authority there is a disconnect between what is expected and what is delivered. So what can managers do to fulfill their employees' expectations?

First, let’s cover what not to do – preach! Employees don’t want words; they want actions. They also do not expect to have to follow a particular religious creed at work. Just as with the separation of church and state, there is an implied separation in the workplace, especially now with employees of many different (or no) faiths. (There are exceptions within privately held, family-run businesses.)

LRN advocates doing two things: first, pause to reflect on the situation as a means of connecting with values; and second, act with humility. The former may be easier than the latter, but it is only with humility that leaders connect more realistically with others. If you act your title, you set up barriers to understanding. If you act as a leader, you open the door to greater understanding.

Dov Seidman, CEO of LRN, advises leaders to instill purpose, to elevate and inspire individuals, and to live their values. Very importantly in this report, Seidman challenges leaders to embrace moral challenges through, as he says, “constant wrestling with the questions of right and wrong, fairness and justice, and with ethical dilemmas.”


Sunday, January 3, 2016

Is It Immoral for Me to Dictate an Accelerated Death for My Future Demented Self?

By Norman L. Cantor
Harvard Law Blog
Originally posted December 2, 2015

I am obsessed with avoiding severe dementia. As a person who has always valued intellectual function, the prospect of lingering in a dysfunctional cognitive state is distasteful — an intolerable indignity. For me, such mental debilitation soils the remembrances to be left with my survivors and undermines the life narrative as a vibrant, thinking, and articulate figure that I assiduously cultivated. (Burdening others is also a distasteful prospect, but it is the vision of intolerable indignity that drives my planning of how to respond to a diagnosis of progressive dementia such as Alzheimer’s.)


I suggest that while a demented persona no longer recalls the values underlying the advance directive (AD) and cannot now be offended by breaches of value-based instructions, those considered instructions are still worthy of respect. As noted, the well-established mechanism of an AD is intended to enable a person to govern the medical handling of their future demented self. And the values and principles underlying advance instructions can certainly include factors beyond the patient’s contemporaneous well-being.


Sunday, November 22, 2015

A Driverless Car Dystopia? Technology and the Lives We Want to Live

By Anthony Painter
Originally published November 6, 2015

Here is an excerpt:

There needs to be a bigger public debate about the type of society we want, how technology can help us, and what institutions we need to help us all interface with the changes we are likely to see. Could block-chain, bitcoin and digital currencies help us spread new forms of collective ownership and give us more power over the public services we use? How do we find a sweet-spot where consumers and workers – and we are both - share equally in the benefits of the ‘sharing economy’? Is a universal Basic Income a necessary foundation for a world of varying frequency and diverse work arrangements and obligations to others such as elderly relatives and our kids? What do we want to be private and what are we happy to share with companies or the state? Should this be a security conversation or bigger question of ethics? How should we plan transport, housing, work and services around our needs and the types of lives we want to live in communities that have human worth?
