Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Self-Control. Show all posts

Friday, December 30, 2022

A new control problem? Humanoid robots, artificial intelligence, and the value of control

Nyholm, S. 
AI Ethics (2022).


The control problem related to robots and AI usually discussed is that we might lose control over advanced technologies. When authors like Nick Bostrom and Stuart Russell discuss this control problem, they write in a way that suggests that having as much control as possible is good while losing control is bad. In life in general, however, not all forms of control are unambiguously positive and unproblematic. Some forms—e.g. control over other persons—are ethically problematic. Other forms of control are positive, and perhaps even intrinsically good. For example, one form of control that many philosophers have argued is intrinsically good and a virtue is self-control. In this paper, I relate these questions about control and its value to different forms of robots and AI more generally. I argue that the more robots are made to resemble human beings, the more problematic it becomes—at least symbolically speaking—to want to exercise full control over these robots. After all, it is unethical for one human being to want to fully control another human being. Accordingly, it might be seen as problematic—viz. as representing something intrinsically bad—to want to create humanoid robots that we exercise complete control over. In contrast, if there are forms of AI such that control over them can be seen as a form of self-control, then this might be seen as a virtuous form of control. The “new control problem”, as I call it, is the question of under what circumstances retaining and exercising complete control over robots and AI is unambiguously ethically good.

From the Concluding Discussion section

Self-control is often valued as good in itself or as an aspect of things that are good in themselves, such as virtue, personal autonomy, and human dignity. In contrast, control over other persons is often seen as wrong and bad in itself. This means, I have argued, that if control over AI can sometimes be seen or conceptualized as a form of self-control, then control over AI can sometimes be not only instrumentally good, but in certain respects also good as an end in itself. It can be a form of extended self-control, and therefore a form of virtue, personal autonomy, or even human dignity.

In contrast, if there will ever be any AI systems that could properly be regarded as moral persons, then it would be ethically problematic to wish to be in full control over them, since it is ethically problematic to want to be in complete control over a moral person. But even before that, it might still be morally problematic to want to be in complete control over certain AI systems; it might be problematic if they are designed to look and behave like human beings. There can be, I have suggested, something symbolically problematic about wanting to be in complete control over an entity that symbolizes or represents something—viz. a human being—that it would be morally wrong and in itself bad to try to completely control.

For these reasons, I suggest that it will usually be a better idea to try to develop AI systems that can sensibly be interpreted as extensions of our own agency while avoiding developing robots that can be, imitate, or represent moral persons. One might ask, though, whether the two possibilities can ever come together, so to speak.

Think, for example, of the robotic copy that the Japanese robotics researcher Hiroshi Ishiguro has created of himself. It is an interesting question whether the agency of this robot could be seen as an extension of Ishiguro’s agency. The robot certainly represents or symbolizes Ishiguro. So, if he has control over this robot, then perhaps this can be seen as a form of extended agency and extended self-control. While it might seem symbolically problematic if Ishiguro wants to have complete control over the robot Erica that he has created, which looks like a human woman, it might not be problematic in the same way if he wants to have complete control over the robotic replica that he has created of himself. At least it would be different in terms of what it can be taken to symbolize or represent.

Friday, November 11, 2022

Moral disciplining: The cognitive and evolutionary foundations of puritanical morality

Fitouchi, L., André, J., & Baumard, N. (2022).
Behavioral and Brain Sciences, 1-71.


Why do many societies moralize apparently harmless pleasures, such as lust, gluttony, alcohol, drugs, and even music and dance? Why do they erect temperance, asceticism, sobriety, modesty, and piety as cardinal moral virtues? According to existing theories, this puritanical morality cannot be reduced to concerns for harm and fairness: it must emerge from cognitive systems that did not evolve for cooperation (e.g., disgust-based “Purity” concerns). Here, we argue that, despite appearances, puritanical morality is no exception to the cooperative function of moral cognition. It emerges in response to a key feature of cooperation, namely that cooperation is (ultimately) a long-term strategy, requiring (proximately) the self-control of appetites for immediate gratification. Puritanical moralizations condemn behaviors which, although inherently harmless, are perceived as indirectly facilitating uncooperative behaviors, by impairing the self-control required to refrain from cheating. Drinking, drugs, immodest clothing, and unruly music and dance are condemned as stimulating short-term impulses, thus facilitating uncooperative behaviors (e.g., violence, adultery, free-riding). Overindulgence in harmless bodily pleasures (e.g., masturbation, gluttony) is perceived as making people slaves to their urges, thus eroding their ability to resist future antisocial temptations. Daily self-discipline, ascetic temperance, and pious ritual observance are perceived as cultivating the self-control required to honor prosocial obligations. We review psychological, historical, and ethnographic evidence supporting this account. We use this theory to explain the fall of puritanism in WEIRD societies, and discuss the cultural evolution of puritanical norms. Explaining puritanical norms does not require adding mechanisms unrelated to cooperation to our models of the moral mind.


Many societies develop apparently unnecessarily austere norms, depriving people of the harmless pleasures of life. In the face of the apparent disconnect between puritanical values and cooperation, these values have either been ignored by cooperation-centered theories of morality or explained by mechanisms orthogonal to cooperative challenges, such as concerns for the purity of the soul, rooted in disgust intuitions. We have argued for a theoretical reintegration of puritanical morality into the otherwise theoretically grounded and empirically supported perspective of morality as cooperation. For deep evolutionary reasons, cooperation as a long-term strategy requires resisting impulses for immediate pleasures. To protect cooperative interactions from the threat of temptation, many societies develop preemptive moralizations aimed at facilitating moral self-control. This may explain why, alongside values of fairness, reciprocity, solidarity, and loyalty, many societies develop hedonically restrictive standards of sobriety, asceticism, temperance, modesty, piety, and self-discipline.

Tuesday, August 31, 2021

What Causes Unethical Behavior? A Meta-Analysis to Set an Agenda for Public Administration Research

Nicola Belle & Paola Cantarelli
Public Administration Review,
Vol. 77, Iss. 3, pp. 327–339


This article uses meta-analysis to synthesize 137 experiments in 73 articles on the causes of unethical behavior. Results show that exposure to in-group members who misbehave or to others who benefit from unethical actions, greed, egocentrism, self-justification, exposure to incremental dishonesty, loss aversion, challenging performance goals, or time pressure increase unethical behavior. In contrast, monitoring of employees, moral reminders, and individuals’ willingness to maintain a positive self-view decrease unethical conduct. Findings on the effect of self-control depletion on unethical behavior are mixed. Results also present subgroup analyses and several measures of study heterogeneity and likelihood of publication bias. The implications are of interest to both scholars and practitioners. The article concludes by discussing which of the factors analyzed should gain prominence in public administration research and uncovering several unexplored causes of unethical behavior.

From the Discussion

Among the factors that our meta-analyses identified as determinants of unethical behavior, the following may be elevated to prominence for public administration research and practice. First, results from the meta-analyses on social influences suggest that being exposed to corrupt colleagues may increase the likelihood that one engages in unethical conduct. These findings are particularly relevant because “[c]orruption in the public sector hampers the efficiency of public services, undermines confidence in public institutions and increases the cost of public transactions” (OECD 2015). Moreover, corruption “may distort government’s public resource allocations” (Liu and Mikesell 2014, 346).

Friday, June 19, 2020

Better Minds, Better Morals: A Procedural Guide to Better Judgment

Schaefer GO, Savulescu J.
J Posthum Stud. 2017;1(1):26‐43.


Making more moral decisions - an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we're employing are sound and reliable. We identify four broad categories of judgment to be targeted - cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A pdf is here.

Saturday, May 30, 2020

Self-Nudging and the Citizen Choice Architect

Samuli Reijula, Ralph Hertwig.
Behavioural Public Policy, 2020
DOI: 10.1017/bpp.2020.5


This article argues that nudges can often be turned into self-nudges: empowering interventions that enable people to design and structure their own decision environments—that is, to act as citizen choice architects. Self-nudging applies insights from behavioral science in a way that is practicable and cost-effective but that sidesteps concerns about paternalism or manipulation. It has the potential to expand the scope of application of behavioral insights from the public to the personal sphere (e.g., homes, offices, families). It is a tool for reducing failures of self-control and enhancing personal autonomy; specifically, self-nudging can mean designing one’s proximate choice architecture to alleviate the effects of self-control problems, engaging in education to understand the nature and causes of self-control problems, and employing simple educational nudges to improve goal attainment in various domains. It can even mean self-paternalistic interventions such as winnowing down one’s choice set by, for instance, removing options.  Policy makers could promote self-nudging by sharing knowledge about nudges and how they work. The ultimate goal of the self-nudging approach is to enable citizen choice architects’ efficient self-governance, where reasonable, and the self-determined arbitration of conflicts between their mutually exclusive goals and preferences.

From the Conclusion:

Commercial choice architects have become proficient in hijacking people’s attention and desires (see, e.g., Nestle 2013; Nestle 2015; Cross and Proctor 2014; Wu 2017), making it difficult for consumers to exercise agency and freedom of choice. Even in the best of circumstances, the potential for public choice architects to nudge people toward better choices in their personal and proximate choice environments is limited. Against this background, we suggest that policy makers should consider the possibility of empowering individuals to make strategic changes in their proximate choice architecture. There is no reason why citizens should not be informed about nudges that can be turned into self-nudges and, more generally, about the design principles of choice environments (e.g., defaults, framing, cognitive accessibility). We suggest that self-nudging is an untapped resource that sidesteps various ethical and practical problems associated with nudging and can empower people to make better everyday choices. This does not mean that regulation or nudging should be replaced by self-nudging; indeed, self-nudging can benefit enormously from the ingenuity of the nudging approach and the evidence accumulating on it. But, as the adage goes, give someone a fish, and you feed them for a day. Teach someone to fish, and you feed them for a lifetime. We believe that sharing behavioral insights from psychology and behavioral economics will provide citizens with the means for taking back power, giving them more control over the design of their proximate choice environments - in other words, qualifying them as citizen choice architects.

The article is here.

Sunday, November 10, 2019

For whom does determinism undermine moral responsibility? Surveying the conditions for free will across cultures

Ivar Hannikainen and others
PsyArXiv Preprints
Originally published October 15, 2019


Philosophers have long debated whether, if determinism is true, we should hold people morally responsible for their actions since in a deterministic universe, people are arguably not the ultimate source of their actions nor could they have done otherwise if initial conditions and the laws of nature are held fixed. To reveal how non-philosophers ordinarily reason about the conditions for free will, we conducted a cross-cultural and cross-linguistic survey (N = 5,268) spanning twenty countries and sixteen languages. Overall, participants tended to ascribe moral responsibility whether the perpetrator lacked sourcehood or alternate possibilities. However, for American, European, and Middle Eastern participants, being the ultimate source of one’s actions promoted perceptions of free will and control as well as ascriptions of blame and punishment. By contrast, being the source of one’s actions was not particularly salient to Asian participants. Finally, across cultures, participants exhibiting greater cognitive reflection were more likely to view free will as incompatible with causal determinism. We discuss these findings in light of documented cultural differences in the tendency toward dispositional versus situational attributions.

The research is here.

Saturday, June 22, 2019

Morality and Self-Control: How They are Intertwined, and Where They Differ

Wilhelm Hofmann, Peter Meindl, Marlon Mooijman, & Jesse Graham
PsyArXiv Preprints
Last edited November 18, 2018


Despite sharing conceptual overlap, morality and self-control research have led largely separate lives. In this article, we highlight neglected connections between these major areas of psychology. To this end, we first note their conceptual similarities and differences. We then show how morality research, typically emphasizing aspects of moral cognition and emotion, may benefit from incorporating motivational concepts from self-control research. Similarly, self-control research may benefit from a better understanding of the moral nature of many self-control domains. We place special focus on various components of self-control and on the ways in which self-control goals may be moralized.


Here is the Conclusion:

How do we resist temptation, prioritizing our future well-being over our present pleasure? And how do we resist acting selfishly, prioritizing the needs of others over our own self-interest? These two questions highlight the links between understanding self-control and understanding morality. We hope we have shown that morality and self-control share considerable conceptual overlap with regard to the way people regulate behavior in line with higher-order values and standards. As the psychological study of both areas becomes increasingly collaborative and integrated, insights from each subfield can better enable research and interventions to increase human health and flourishing.

The info is here.

Friday, April 5, 2019

Ordinary people associate addiction with loss of free will

A. J. Vonasch, C. J. Clark, S. Laub, K. D. Vohs, & R. F. Baumeister
Addictive Behaviors Reports
Volume 5, June 2017, Pages 56-66

It is widely believed that addiction entails a loss of free will, even though this point is controversial among scholars. There is arguably a downside to this belief, in that addicts who believe they lack the free will to quit an addiction might therefore fail to quit an addiction.

A correlational study tested the relationship between belief in free will and addiction. Follow-up studies tested steps of a potential mechanism: 1) people think drugs undermine free will; 2) people believe addiction undermines free will more when doing so serves the self; and 3) disbelief in free will leads people to perceive various temptations as more addictive.

People with lower belief in free will were more likely to have a history of addiction to alcohol and other drugs, and also less likely to have successfully quit alcohol. People believe that drugs undermine free will, and they use this belief to self-servingly attribute less free will to their bad actions than to good ones. Low belief in free will also increases perceptions that things are addictive.

Addiction is widely seen as loss of free will. The belief can be used in self-serving ways that may undermine people's efforts to quit.

The research is here.

Friday, January 25, 2019

Decision-Making and Self-Governing Systems

Adina L. Roskies
October 2018, Volume 11, Issue 3, pp 245–257


Neuroscience has illuminated the neural basis of decision-making, providing evidence that supports specific models of decision-processes. These models typically are quite mechanical, the realization of abstract mathematical “diffusion to bound” models. While effective decision-making seems to be essential for sophisticated behavior, central to an account of freedom, and a necessary characteristic of self-governing systems, it is not clear how the simple models neuroscience inspires can underlie the notion of self-governance. Drawing from both philosophy and neuroscience I explore ways in which the proposed decision-making architectures can play a role in systems that can reasonably be thought of as “self-governing”.

Here is an excerpt:

The importance of prospection for self-governance cannot be overstated. One example in which it promises to play an important role is in the exercise of and failures of self-control. Philosophers have long been puzzled by the apparent possibility of akrasia or weakness of will: choosing to act in ways that one judges not to be in one’s best interest. Weakness of will is thought to be an example of irrational choice. If one’s theory of choice is that one always decides to pursue the option that has the highest value, and that it is rational to choose what one most values, it is hard to explain irrational choices. Apparent cases of weakness of will would really be cases of mistaken valuation: overvaluing an option that is in fact not the most valuable option. And indeed, if one cannot rationally criticize the strength of desires (see Hume’s famous observation that “it is not against reason that I should prefer the destruction of half the world to the pricking of my little finger”), we cannot explain irrational choice.

The article is here.

Tuesday, August 14, 2018

The developmental and cultural psychology of free will

Tamar Kushnir
Philosophy Compass
Originally published July 12, 2018


This paper provides an account of the developmental origins of our belief in free will based on research from a range of ages—infants, preschoolers, older children, and adults—and across cultures. The foundations of free will beliefs are in infants' understanding of intentional action—their ability to use context to infer when agents are free to “do otherwise” and when they are constrained. In early childhood, new knowledge about causes of action leads to new abilities to imagine constraints on action. Moreover, unlike adults, young children tend to view psychological causes (i.e., desires) and social causes (i.e., following rules or group norms, being kind or fair) of action as constraints on free will. But these beliefs change, and also diverge across cultures, corresponding to differences between Eastern and Western philosophies of mind, self, and action. Finally, new evidence shows developmentally early, culturally dependent links between free will beliefs and behavior, in particular when choice‐making requires self‐control.

Here is part of the Conclusion:

I've argued here that free will beliefs are early‐developing and culturally universal, and that the folk psychology of free will involves considering actions in the context of alternative possibilities and constraints on possibility. There are developmental differences in how children reason about the possibility of acting against desires, and there are both developmental and cultural differences in how children consider the social and moral limitations on possibility.  Finally, there is new evidence emerging for developmentally early, culturally moderated links between free will beliefs and willpower, delay of gratification, and self‐regulation.

The article is here.

Wednesday, February 28, 2018

Why willpower is overrated

Brian Resnick
Originally published January 15, 2018

Here is an excerpt:

What we can learn from people who are good at self-control

So who are these people who are rarely tested by temptations? They’re doing something right. Recent research suggests a few lessons we can draw from them.

1) People who are better at self-control actually enjoy the activities some of us resist — like eating healthy, studying, or exercising.

So engaging in these activities isn’t a chore for them. It’s fun.

“‘Want to’ goals are more likely to be obtained than ‘have to’ goals,” Milyavskaya said in an interview last year. “Want-to goals lead to experiences of fewer temptations. It’s easier to pursue those goals. It feels more effortless.”

If you’re running because you “have to” get in shape but find running to be a miserable activity, you’re probably not going to keep it up. An activity you like is more likely to be repeated than an activity you hate.

2) People who are good at self-control have learned better habits.

In 2015, psychologists Brian Galla and Angela Duckworth published a paper in the Journal of Personality and Social Psychology, finding across six studies and more than 2,000 participants that people who are good at self-control also tend to have good habits — like exercising regularly, eating healthy, sleeping well, and studying.

“People who are good at self-control … seem to be structuring their lives in a way to avoid having to make a self-control decision in the first place,” Galla tells me. And structuring your life is a skill. People who do the same activity, like running or meditating, at the same time each day have an easier time accomplishing their goals, he says — not because of their willpower, but because the routine makes it easier.

The article is here.

Monday, November 13, 2017

Medical Evidence Debated

Ralph Bartholdt
Coeur d’Alene Press 
Originally posted October 27, 2017

Here is an excerpt:

“The point of this is not that he had a choice,” he said. “But what’s been loaded into his system, what he’s making the choices with.”

Thursday’s expert witness, psychologist Richard Adler, further developed the argument that Renfro suffered from a brain disorder evidenced by a series of photograph-like images of Renfro’s brain that showed points of trauma. He pointed out degeneration of white matter responsible for transmitting information from the front to the back of the brain, and shrunken portions on one side of the brain that were not symmetrical with their mirror images on the other side.

Physical evidence coinciding with the findings includes Renfro’s choppy speech patterns and mannerisms, as well as his inability to make cognitive connections and his lack of social skills, Adler said.

Defense attorney Jay Logsdon asked if the images were obtained through a discredited method, one that has “been attacked as junk science?”

The method, called QEEG, for quantitative electroencephalogram, uses electrical patterns that show electrical activity inside the brain’s cortex to determine impairment; it was attacked in an article in 1997. The article’s criticism still stands today, Adler said.

Throughout the morning and into the afternoon, Adler reiterated findings, linking them to the defendant’s actions, and dovetailing them into other test results, psychological and cognitive, that have been conducted while Renfro has been incarcerated in the Kootenai County Jail.

The article is here.

Sunday, October 29, 2017

Courage and Compassion: Virtues in Caring for So-Called “Difficult” Patients

Michael Hawking, Farr A. Curlin, and John D. Yoon
AMA Journal of Ethics. April 2017, Volume 19, Number 4: 357-363.


What, if anything, can medical ethics offer to assist in the care of the “difficult” patient? We begin with a discussion of virtue theory and its application to medical ethics. We conceptualize the “difficult” patient as an example of a “moral stress test” that especially challenges the physician’s character, requiring the good physician to display the virtues of courage and compassion. We then consider two clinical vignettes to flesh out how these virtues might come into play in the care of “difficult” patients, and we conclude with a brief proposal for how medical educators might cultivate these essential character traits in physicians-in-training.

Here is an excerpt:

To give a concrete example of a virtue that will be familiar to anyone in medicine, consider the virtue of temperance. A temperate person exhibits appropriate self-control or restraint. Aristotle describes temperance as a mean between two extremes—in the case of eating, an extreme lack of temperance can lead to morbid obesity and its excess to anorexia. Intemperance is a hallmark of many of our patients, particularly among those with type 2 diabetes, alcoholism, or cigarette addiction. Clinicians know all too well the importance of temperance because they see the results for human beings who lack it—whether it be amputations and dialysis for the diabetic patient; cirrhosis, varices, and coagulopathy for the alcoholic patient; or chronic obstructive pulmonary disease and lung cancer for the lifelong smoker. In all of these cases, intemperance inhibits a person’s ability to flourish. These character traits do, of course, interact with social, cultural, and genetic factors in impacting an individual’s health, but a more thorough exploration of these factors is outside the scope of this paper.

The article is here.

Thursday, October 5, 2017

Leadership Takes Self-Control. Here’s What We Know About It

Kai Chi (Sam) Yam, Huiwen Lian, D. Lance Ferris, Douglas Brown
Harvard Business Review
Originally published June 5, 2017

Here is an excerpt:

Our review identified a few consequences that are consistently linked to having lower self-control at work:
  1. Increased unethical/deviant behavior: Studies have found that when self-control resources are low, nurses are more likely to be rude to patients, tax accountants are more likely to engage in fraud, and employees in general engage in various forms of unethical behavior, such as lying to their supervisors, stealing office supplies, and so on.
  2. Decreased prosocial behavior: Depleted self-control makes employees less likely to speak up if they see problems at work, less likely to help fellow employees, and less likely to engage in corporate volunteerism.
  3. Reduced job performance: Lower self-control can lead employees to spend less time on difficult tasks, exert less effort at work, be more distracted (e.g., surfing the internet during working hours), and generally perform worse than they would had their self-control been normal.
  4. Negative leadership styles: Perhaps what’s most concerning is that leaders with lower self-control often exhibit counterproductive leadership styles. They are more likely to verbally abuse their followers (rather than using positive means to motivate them), more likely to build weak relationships with their followers, and they are less charismatic. Scholars have estimated that the cost of such negative and abusive behavior to corporations in the United States is $23.8 billion annually.
Our review makes clear that helping employees maintain self-control is an important task if organizations want to be more effective and ethical. Fortunately, we identified three key factors that can help leaders foster self-control among employees and mitigate the negative effects of losing self-control.

The article is here.

Wednesday, October 4, 2017

Better Minds, Better Morals: A Procedural Guide to Better Judgment

G. Owen Schaefer and Julian Savulescu
Journal of Posthuman Studies
Vol. 1, No. 1, Journal of Posthuman Studies (2017), pp. 26-43


Making more moral decisions – an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we’re employing are sound and reliable. We identify four broad categories of judgment to be targeted – cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A copy of the article is here.

Friday, August 11, 2017

The real problem (of consciousness)

Anil K Seth
Originally posted November 2, 2016

Here is an excerpt:

The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).

Armed with this theory of perception, we can return to consciousness. Now, instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions, than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.

The article is here.

Tuesday, June 27, 2017

Resisting Temptation for the Good of the Group: Binding Moral Values and the Moralization of Self-Control

Mooijman, Marlon; Meindl, Peter; Oyserman, Daphna; Monterosso, John; Dehghani, Morteza; Doris, John M.; Graham, Jesse
Journal of Personality and Social Psychology, June 12, 2017.


When do people see self-control as a moral issue? We hypothesize that the group-focused “binding” moral values of Loyalty/betrayal, Authority/subversion, and Purity/degradation play a particularly important role in this moralization process. Nine studies provide support for this prediction. First, moralization of self-control goals (e.g., losing weight, saving money) is more strongly associated with endorsing binding moral values than with endorsing individualizing moral values (Care/harm, Fairness/cheating). Second, binding moral values mediate the effect of other group-focused predictors of self-control moralization, including conservatism, religiosity, and collectivism. Third, guiding participants to consider morality as centrally about binding moral values increases moralization of self-control more than guiding participants to consider morality as centrally about individualizing moral values. Fourth, we replicate our core finding that moralization of self-control is associated with binding moral values across studies differing in measures and design—whether we measure the relationship between moral and self-control language across time, the perceived moral relevance of self-control behaviors, or the moral condemnation of self-control failures. Taken together, our findings suggest that self-control moralization is primarily group-oriented and is sensitive to group-oriented cues.

The article is here.

Tuesday, November 15, 2016

Scientists “Switch Off” Self-Control Using Brain Stimulation

By Catherine Caruso
Scientific American
Originally published on October 19, 2016

Imagine you are faced with the classic thought experiment dilemma: You can take a pile of money now or wait and get an even bigger stash of cash later on. Which option do you choose? Your level of self-control, researchers have found, may have to do with a region of the brain that lets us take the perspective of others—including that of our future self.

A study, published today in Science Advances, found that when scientists used noninvasive brain stimulation to disrupt a brain region called the temporoparietal junction (TPJ), people appeared less able to see things from the point of view of their future selves or of another person, and consequently were less likely to share money with others and more inclined to opt for immediate cash instead of waiting for a larger bounty at a later date.

The TPJ, which is located where the temporal and parietal lobes meet, plays an important role in social functioning, particularly in our ability to understand situations from the perspectives of other people. However, according to Alexander Soutschek, an economist at the University of Zurich and lead author on the study, previous research on self-control and delayed gratification has focused instead on the prefrontal brain regions involved in impulse control.

The article is here.

Sunday, July 31, 2016

Neural mechanisms underlying the impact of daylong cognitive work on economic decisions

Bastien Blain, Guillaume Hollard, and Mathias Pessiglione
PNAS 2016 113 (25) 6967-6972


The ability to exert self-control is key to social insertion and professional success. An influential literature in psychology has developed the theory that self-control relies on a limited common resource, so that fatigue effects might carry over from one task to the next. However, the biological nature of the putative limited resource and the existence of carry-over effects have been matters of considerable controversy. Here, we targeted the activity of the lateral prefrontal cortex (LPFC) as a common substrate for cognitive control, and we prolonged the time scale of fatigue induction by an order of magnitude. Participants performed executive control tasks known to recruit the LPFC (working memory and task-switching) over more than 6 h (an approximate workday). Fatigue effects were probed regularly by measuring impulsivity in intertemporal choices, i.e., the propensity to favor immediate rewards, which has been found to increase under LPFC inhibition. Behavioral data showed that choice impulsivity increased in a group of participants who performed hard versions of executive tasks but not in control groups who performed easy versions or enjoyed some leisure time. Functional MRI data acquired at the start, middle, and end of the day confirmed that enhancement of choice impulsivity was related to a specific decrease in the activity of an LPFC region (in the left middle frontal gyrus) that was recruited by both executive and choice tasks. Our findings demonstrate a concept of focused neural fatigue that might be naturally induced in real-life situations and have important repercussions on economic decisions.


In evolved species, resisting the temptation of immediate rewards is a critical ability for the achievement of long-term goals. This self-control ability was found to rely on the lateral prefrontal cortex (LPFC), which also is involved in executive control processes such as working memory or task switching. Here we show that self-control capacity can be altered in healthy humans at the time scale of a workday, by performing difficult executive control tasks. This fatigue effect manifested in choice impulsivity was linked to reduced excitability of the LPFC following its intensive utilization over the day. Our findings might have implications for designing management strategies that would prevent daylong cognitive work from biasing economic decisions.

The research is here.

Sunday, May 15, 2016

Legal Insanity and Executive Function

Katrina Sifferd, William Hirstein, and Tyler Fagan
Under review to be included in The Insanity Defense: Multidisciplinary Views on Its History, Trends, and Controversies (Mark D. White, Ed.) Praeger (expected Nov. 2016)

1. The cognitive capacities relevant to legal insanity

Legal insanity is a legal concept rather than a medical one. This may seem an obvious point, but it is worth reflecting on the divergent purposes and motivations for legal, as opposed to medical, concepts. Medical categories of disease are shaped by the medical professions’ aims of understanding, diagnosing, and treating illness. Categories of legal excuse, on the other hand, serve the aims of determining criminal guilt and punishment.

A theory of legal responsibility and its criteria should exhibit symmetry between the capacities it posits as necessary for moral, and more specifically, legal agency, and the capacities that, when dysfunctional or compromised, qualify a defendant for an excuse. To put the point more strongly, when the capacities necessary for legal agency are sufficiently compromised, that compromise should necessarily disqualify one from legal culpability. Thus one’s view of legal insanity ought to reflect whatever one thinks are the overall purposes of the criminal law. If the purpose of criminal punishment is social order, then legal agency entails the capacity to be law-abiding such that one does not undermine the social order. If the purpose is institutionalized moral blame for wrongful acts, then legal agency entails the capacities for moral agency. If a criminal code embraces a hybrid theory of criminal law, then all of these capacities are relevant to legal agency.

In this chapter we will argue that the capacities necessary to moral and legal agency can be understood as executive functions in the brain.

The chapter is here.