Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, March 8, 2023

Neuroscience is ready for neuroethics engagement

Das, J., Forlini, C., Porcello, D. M. et al.
Front. Commun., 21 December 2022
Sec. Science and Environmental Communication

Neuroscience research has been expanding, providing new insights into brain and nervous system function and potentially transformative technological applications. In recent years, there has been a flurry of prominent international scientific academies and intergovernmental organizations calling for engagement with different publics on social, ethical, and regulatory issues related to neuroscience and neurotechnology advances. Neuroscientific activities and outputs are value-laden; they reflect the cultural, ethical, and political values that are prioritized in different societies at a given time and impact a variety of publics beyond the laboratory. The focus on engagement in neuroscience recognizes the breadth and significance of current neuroscience research whilst acknowledging the need for a neuroethical approach that explores the epistemic and moral values influencing the neuroscientific agenda. The field of neuroethics is characterized by its focus on the social, legal, and philosophical implications of neuroscience including its impact on cultural assumptions about the cognitive experience, identity, consciousness, and decision-making. Here, we outline a proposal for neuroethics engagement that reflects an enhanced and evolving understanding of public engagement with neuroethical issues to create opportunities to share ideation, decision-making, and collaboration in neuroscience endeavors for the benefit of society. We demonstrate the synergies between public engagement and neuroethics scholarship and activities that can guide neuroethics engagement.

Conclusion

Building on research from numerous fields and experiences of the past, engagement between neuroscience, neuroethics, and publics offers a critical lens for anticipating and interrogating the unique societal implications of neuroscience discovery and dissemination, and it can help guide regulation so that neuroscience products promote societal well-being. Engagement offers a bridge not only for neuroscientists and neuroethicists, but also for neuroethics and the public. It is possible that more widespread use of neuroethics engagement will reveal yet unknown or overlooked ethical conflicts in neuroscience that may take priority over the ones listed here.

We offer this paper as part of a continued and expanded dialogue on neuroethics engagement. The concept we propose will require the input of stakeholders beyond neuroethics, neuroscience, and public engagement in science to build practices that are inclusive and fit for purpose. Effective neuroethics engagement should be locally and temporally informed, lead to a culturally situated understanding of science and diplomacy, aim to understand the transnational nature of scientific knowledge, and be mindful of the challenges raised by how knowledge of discoveries circulates.

Friday, August 5, 2022

The Neuroscience Behind Bad Decisions

Emily Singer
Quanta Magazine
Originally posted 13 AUG 16

Here are excerpts:

Economists have spent more than 50 years cataloging irrational choices like these. Nobel Prizes have been earned; millions of copies of Freakonomics have been sold. But economists still aren’t sure why they happen. “There had been a real cottage industry in how to explain them and lots of attempts to make them go away,” said Eric Johnson, a psychologist and co-director of the Center for Decision Sciences at Columbia University. But none of the half-dozen or so explanations are clear winners, he said.

In the last 15 to 20 years [this article was written in 2016], neuroscientists have begun to peer directly into the brain in search of answers. “Knowing something about how information is represented in the brain and the computational principles of the brain helps you understand why people make decisions how they do,” said Angela Yu, a theoretical neuroscientist at the University of California, San Diego.

Glimcher is using both the brain and behavior to try to explain our irrationality. He has combined results from studies like the candy bar experiment with neuroscience data — measurements of electrical activity in the brains of animals as they make decisions — to develop a theory of how we make decisions and why that can lead to mistakes.

(cut)

But the decision-making system operates under more complex constraints and has to consider many different types of information. For example, a person might choose which house to buy depending on its location, size or style. But the relative importance of each of these factors, as well as their optimal value — city or suburbs, Victorian or modern — is fundamentally subjective. It varies from person to person and may even change for an individual depending on their stage of life. “There is not one simple, easy-to-measure mathematical quantity like redundancy that decision scientists universally agree on as being a key factor in the comparison of competing alternatives,” Yu said.

She suggests that uncertainty in how we value different options is behind some of our poor decisions. “If you’ve bought a lot of houses, you’ll evaluate houses differently than if you were a first-time homebuyer,” Yu said. “Or if your parents bought a house during the housing crisis, it may later affect how you buy a house.”

Moreover, Yu argues, the visual and decision-making systems have different end-goals. “Vision is a sensory system whose job is to recover as much information as possible from the world,” she said. “Decision-making is about trying to make a decision you’ll enjoy. I think the computational goal is not just information, it’s something more behaviorally relevant like total enjoyment.”

For many of us, the main concern over decision-making is practical — how can we make better decisions? Glimcher said that his research has helped him develop specific strategies. “Rather than pick what I hope is the best, instead I now always start by eliminating the worst element from a choice set,” he said, reducing the number of options to something manageable, like three.


Curator's note: Oddly enough, this last strategy is exactly what personalized algorithms do. Narrowing people to a limited set of options has both positive and negative aspects: while it may help with decision-making, it also feeds political polarization.
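Glimcher's elimination strategy can be sketched as a simple procedure. This is purely illustrative; the scoring function, the example options, and the cutoff of three survivors are assumptions, not details from his research.

```python
def prune_then_choose(options, score, keep=3):
    """Glimcher-style heuristic sketch: repeatedly drop the worst-scoring
    option until only `keep` remain, then pick the best survivor."""
    pool = list(options)
    while len(pool) > keep:
        pool.remove(min(pool, key=score))
    return max(pool, key=score)

# Hypothetical subjective values for five houses (not real data).
houses = {"A": 0.9, "B": 0.2, "C": 0.6, "D": 0.4, "E": 0.8}
best = prune_then_choose(houses, score=houses.get)
print(best)  # "A": the worst options (B, D) are eliminated first
```

The point of the heuristic is that rejecting obvious losers is computationally cheaper, and less error-prone, than directly comparing every option against every other.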

Tuesday, May 10, 2022

Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness

Anthis, J.R. (2022). 
In: Klimov, V.V., Kelley, D.J. (eds) Biologically 
Inspired Cognitive Architectures 2021. BICA 2021. 
Studies in Computational Intelligence, vol 1032. 
Springer, Cham. 
https://doi.org/10.1007/978-3-030-96993-6_3

Abstract

Many philosophers and scientists claim that there is a ‘hard problem of consciousness’, that qualia, phenomenology, or subjective experience cannot be fully understood with reductive methods of neuroscience and psychology, and that there is a fact of the matter as to ‘what it is like’ to be conscious and which entities are conscious. Eliminativism and related views such as illusionism argue against this. They claim that consciousness does not exist in the ways implied by everyday or scholarly language. However, this debate has largely consisted of each side jousting analogies and intuitions against the other. Both sides remain unconvinced. To break through this impasse, I present consciousness semanticism, a novel eliminativist theory that sidesteps analogy and intuition. Instead, it is based on a direct, formal argument drawing from the tension between the vague semantics in definitions of consciousness such as ‘what it is like’ to be an entity and the precise meaning implied by questions such as, ‘Is this entity conscious?’ I argue that semanticism naturally extends to erode realist notions of other philosophical concepts, such as morality and free will. Formal argumentation from precise semantics exposes these as pseudo-problems and eliminates their apparent mysteriousness and intractability.

From Implications and Concluding Remarks

Perhaps even more importantly, humanity seems to be rapidly developing the capacity to create vastly more intelligent beings than currently exist. Scientists and engineers have already built artificial intelligences from chess bots to sex bots.  Some projects are already aimed at the organic creation of intelligence, growing increasingly large sections of human brains in the laboratory. Such minds could have something we want to call consciousness, and they could exist in astronomically large numbers. Consider if creating a new conscious being becomes as easy as copying and pasting a computer program or building a new robot in a factory. How will we determine when these creations become conscious or sentient?  When do they deserve legal protection or rights? These are important motivators for the study of consciousness, particularly for the attempt to escape the intellectual quagmire that may have grown from notions such as the ‘hard problem’ and ‘problem of other minds’. Andreotta (2020) argues that the project of ‘AI rights’,  including artificial intelligences in the moral circle, is ‘beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the “Hard Problem” of consciousness’. While the extent of the impediment is unclear, a resolution of the ‘hard problem’ such as the one I have presented could make it easier to extend moral concern to artificial intelligences.

Thursday, April 7, 2022

How to Prevent Robotic Sociopaths: A Neuroscience Approach to Artificial Ethics

Christov-Moore, L., Reggente, N.,  et al.
https://doi.org/10.31234/osf.io/6tn42

Abstract

Artificial intelligence (AI) is expanding into every niche of human life, organizing our activity, expanding our agency and interacting with us to an increasing extent. At the same time, AI’s efficiency, complexity and refinement are growing quickly. Justifiably, there is increasing concern with the immediate problem of engineering AI that is aligned with human interests.

Computational approaches to the alignment problem attempt to design AI systems to parameterize human values like harm and flourishing, and avoid overly drastic solutions, even if these are seemingly optimal. In parallel, ongoing work in service AI (caregiving, consumer care, etc.) is concerned with developing artificial empathy, teaching AIs to decode human feelings and behavior, and evince appropriate, empathetic responses. This could be equated to cognitive empathy in humans.

We propose that in the absence of affective empathy (which allows us to share in the states of others), existing approaches to artificial empathy may fail to produce the caring, prosocial component of empathy, potentially resulting in superintelligent, sociopath-like AI. We adopt the colloquial usage of “sociopath” to signify an intelligence possessing cognitive empathy (i.e., the ability to infer and model the internal states of others), but crucially lacking harm aversion and empathic concern arising from vulnerability, embodiment, and affective empathy (which permits for shared experience). An expanding, ubiquitous intelligence that does not have a means to care about us poses a species-level risk.

It is widely acknowledged that harm aversion is a foundation of moral behavior. However, harm aversion is itself predicated on the experience of harm, within the context of the preservation of physical integrity. Following from this, we argue that a “top-down” rule-based approach to achieving caring, aligned AI may be unable to anticipate and adapt to the inevitable novel moral/logistical dilemmas faced by an expanding AI. It may be more effective to cultivate prosociality from the bottom up, baked into an embodied, vulnerable artificial intelligence with an incentive to preserve its real or simulated physical integrity. This may be achieved via optimization for incentives and contingencies inspired by the development of empathic concern in vivo. We outline the broad prerequisites of this approach and review ongoing work that is consistent with our rationale.
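The bottom-up proposal above can be loosely pictured as reward shaping for an embodied agent. The formula, parameter names, and weights below are this curator's illustrative assumptions; the authors propose no specific equation.

```python
def shaped_reward(task_reward, own_damage, partner_damage,
                  self_care=1.0, concern=0.5):
    """Embodied-vulnerability sketch: the agent is penalized for damage to
    its own (real or simulated) body, and, via a shared-experience term,
    for damage to others -- a crude stand-in for affective empathy built
    on top of harm aversion."""
    return task_reward - self_care * own_damage - concern * partner_damage

# An action that earns task reward but harms both agent and partner
# is discounted by both penalty terms (result is roughly 0.6 here).
print(shaped_reward(task_reward=1.0, own_damage=0.2, partner_damage=0.4))
```

The paper's argument is that such concern should emerge from developmental incentives rather than being hand-coded as a fixed penalty; the fixed weights here are only a static snapshot of that idea.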

If successful, work of this kind could allow for AI that surpasses empathic fatigue and the idiosyncrasies, biases, and computational limits of human empathy. The scalable complexity of AI may allow it unprecedented capability to deal proportionately and compassionately with complex, large-scale ethical dilemmas. By addressing this problem seriously in the early stages of AI’s integration with society, we might eventually produce an AI that plans and behaves with an ingrained regard for the welfare of others, aided by the scalable cognitive complexity necessary to model and solve extraordinary problems.

Monday, March 29, 2021

The problem with prediction

Joseph Fridman
aeon.com
Originally published 25 Jan 21

Here is an excerpt:

Today, many neuroscientists exploring the predictive brain deploy contemporary economics as a similar sort of explanatory heuristic. Scientists have come a long way in understanding how ‘spending metabolic money to build complex brains pays dividends in the search for adaptive success’, remarks the philosopher Andy Clark, in a notable review of the predictive brain. The idea of the predictive brain makes sense because it is profitable, metabolically speaking. Similarly, the psychologist Lisa Feldman Barrett describes the primary role of the predictive brain as managing a ‘body budget’. In this view, she says, ‘your brain is kind of like the financial sector of a company’, predictively allocating resources, spending energy, speculating, and seeking returns on its investments. For Barrett and her colleagues, stress is like a ‘deficit’ or ‘withdrawal’ from the body budget, while depression is bankruptcy. In Blackmore’s day, the brain was made up of sentries and soldiers, whose collective melancholy became the sadness of the human being they inhabited. Today, instead of soldiers, we imagine the brain as composed of predictive statisticians, whose errors become our neuroses. As the neuroscientist Karl Friston said: ‘[I]f the brain is an inference machine, an organ of statistics, then when it goes wrong, it’ll make the same sorts of mistakes a statistician will make.’

The strength of this association between predictive economics and brain sciences matters, because – if we aren’t careful – it can encourage us to reduce our fellow humans to mere pieces of machinery. Our brains were never computer processors, as useful as it might have been to imagine them that way every now and then. Nor are they literally prediction engines now and, should it come to pass, they will not be quantum computers. Our bodies aren’t empires that shuttle around sentrymen, nor are they corporations that need to make good on their investments. We aren’t fundamentally consumers to be tricked, enemies to be tracked, or subjects to be predicted and controlled. Whether the arena be scientific research or corporate intelligence, it becomes all too easy for us to slip into adversarial and exploitative framings of the human; as Galison wrote, ‘the associations of cybernetics (and the cyborg) with weapons, oppositional tactics, and the black-box conception of human nature do not so simply melt away.’

Tuesday, February 16, 2021

Strategic Regulation of Empathy

Weisz, E., & Cikara, M. 
(2020, October 9).

Abstract

Empathy is an integral part of socio-emotional well-being, yet recent research has highlighted some of its downsides. Here we examine literature that establishes when, how much, and what aspects of empathy promote specific outcomes. After reviewing a theoretical framework which characterizes empathy as a suite of separable components, we examine evidence showing how dissociations of these components affect important socio-emotional outcomes and describe emerging evidence suggesting that these components can be independently and deliberately modulated. Finally, we advocate for a new approach to a multi-component view of empathy which accounts for the interrelations among components. This perspective advances scientific conceptualization of empathy and offers suggestions for tailoring empathy to help people realize their social, emotional, and occupational goals.

From Concluding Remarks

Early research on empathy regarded it as a monolithic construct. This characterization ultimately gave rise to a second wave of empathy-related research, which explicitly examined dissociations among empathy-related components. Subsequently, researchers noticed that individual components held different predictive power over key outcomes such as helping and occupational burnout. As described above, however, there are many instances in which these components track together in the real world, suggesting that although they can dissociate, they often operate in tandem.

Because empathy-related components rely on separable neural systems, the field of social neuroscience has already made significant progress toward the goal of characterizing instances when components do (or do not) track together. For example, although affective and cognitive channels can independently contribute to judgments of others' emotional states, they also operate in synchrony during more naturalistic socio-emotional tasks. However, far more behavioral research is needed to characterize the co-occurrence of components in people’s everyday social interactions. Because people differ in their tendencies to engage distinct components of empathy, a better understanding of the separability and interrelations of these components in real-world social scenarios can help tailor empathy-training programs to promote desirable outcomes. Empathy-training efforts are on average effective (Hedges’ g = 0.51) but generally intervene on empathy as a whole (rather than specific components).

Friday, February 21, 2020

Friends or foes: Is empathy necessary for moral behavior?

Jean Decety and Jason M. Cowell
Perspect Psychol Sci. 2014 Sep; 9(4): 525–537.
doi: 10.1177/1745691614545130

Abstract

The past decade has witnessed a flurry of empirical and theoretical research on morality and empathy, as well as increased interest and usage in the media and the public arena. At times, in both popular media and academia, morality and empathy are used interchangeably, and quite often the latter is considered to play a foundational role for the former. In this article, we argue that, while there is a relationship between morality and empathy, it is not as straightforward as it appears at first glance. Moreover, it is critical to distinguish between the different facets of empathy (emotional sharing, empathic concern, and perspective taking), as each uniquely influences moral cognition and predicts differential outcomes in moral behavior. Empirical evidence and theories from evolutionary biology, developmental, behavioral, and affective and social neuroscience are comprehensively integrated in support of this argument. The wealth of findings illustrates a complex and equivocal relationship between morality and empathy. The key to understanding such relations is to be more precise on the concepts being used, and perhaps abandoning the muddy concept of empathy.

From the Conclusion:

To wrap up on a provocative note, it may be advantageous for the science of morality, in the future, to refrain from using the catch-all term of empathy, which applies to a myriad of processes and phenomena, and as a result yields confusion in both understanding and predictive ability. In both academic and applied domains such as medicine, ethics, law and policy, empathy has become an enticing but muddy notion, potentially leading to misinterpretation. If ancient Greek philosophy has taught us anything, it is that when a concept is attributed with so many meanings, it is at risk of losing its function.


Monday, December 2, 2019

Neuroscientific evidence in the courtroom: a review.

Aono, D., Yaffe, G., & Kober, H.
Cogn. Research 4, 40 (2019)
doi:10.1186/s41235-019-0179-y

Abstract

The use of neuroscience in the courtroom can be traced back to the early twentieth century. However, the use of neuroscientific evidence in criminal proceedings has increased significantly over the last two decades. This rapid increase has raised questions, among the media as well as the legal and scientific communities, regarding the effects that such evidence could have on legal decision makers. In this article, we first outline the history of neuroscientific evidence in courtrooms and then we provide a review of recent research investigating the effects of neuroscientific evidence on decision-making broadly, and on legal decisions specifically. In the latter case, we review studies that measure the effect of neuroscientific evidence (both imaging and nonimaging) on verdicts, sentencing recommendations, and beliefs of mock jurors and judges presented with a criminal case. Overall, the reviewed studies suggest mitigating effects of neuroscientific evidence on some legal decisions (e.g., the death penalty). Furthermore, factors such as mental disorder diagnoses and perceived dangerousness might moderate the mitigating effect of such evidence. Importantly, neuroscientific evidence that includes images of the brain does not appear to have an especially persuasive effect (compared with other neuroscientific evidence that does not include an image). Future directions for research are discussed, with a specific call for studies that vary defendant characteristics, the nature of the crime, and a juror’s perception of the defendant, in order to better understand the roles of moderating factors and cognitive mediators of persuasion.

Significance

The increased use of neuroscientific evidence in criminal proceedings has led some to wonder what effects such evidence has on legal decision makers (e.g., jurors and judges) who may be unfamiliar with neuroscience. There is some concern that legal decision makers may be unduly influenced by testimony and images related to the defendant’s brain. This paper briefly reviews the history of neuroscientific evidence in the courtroom to provide context for its current use. It then reviews the current research examining the influence of neuroscientific evidence on legal decision makers and potential moderators of such effects. Our synthesis of the findings suggests that neuroscientific evidence has some mitigating effects on legal decisions, although neuroimaging-based evidence does not hold any special persuasive power. With this in mind, we provide recommendations for future research in this area. Our review and conclusions have implications for scientists, legal scholars, judges, and jurors, who could all benefit from understanding the influence of neuroscientific evidence on judgments in criminal cases.

Wednesday, October 2, 2019

Seven Key Misconceptions about Evolutionary Psychology

Laith Al-Shawaf
www.areomagazine.com
Originally published August 20, 2019

Evolutionary approaches to psychology hold the promise of revolutionizing the field and unifying it with the biological sciences. But among both academics and the general public, a few key misconceptions impede its application to psychology and behavior. This essay tackles the most pervasive of these.

Misconception 1: Evolution and Learning Are Conflicting Explanations for Behavior

People often assume that if something is learned, it’s not evolved, and vice versa. This is a misleading way of conceptualizing the issue, for three key reasons.

First, many evolutionary hypotheses are about learning. For example, the claim that humans have an evolved fear of snakes and spiders does not mean that people are born with this fear. Instead, it means that humans are endowed with an evolved learning mechanism that acquires a fear of snakes more easily and readily than other fears. Classic studies in psychology show that monkeys can acquire a fear of snakes through observational learning, and they tend to acquire it more quickly than a similar fear of other objects, such as rabbits or flowers. It is also harder for monkeys to unlearn a fear of snakes than it is to unlearn other fears. As with monkeys, the hypothesis that humans have an evolved fear of snakes does not mean that we are born with this fear. Instead, it means that we learn this fear via an evolved learning mechanism that is biologically prepared to acquire some fears more easily than others.

Second, learning is made possible by evolved mechanisms instantiated in the brain. We are able to learn because we are equipped with neurocognitive mechanisms that enable learning to occur—and these neurocognitive mechanisms were built by evolution. Consider the fact that both children and puppies can learn, but if you try to teach them the same thing—French, say, or game theory—they end up learning different things. Why? Because the dog’s evolved learning mechanisms are different from those of the child. What organisms learn, and how they learn it, depends on the nature of the evolved learning mechanisms housed in their brains.



Tuesday, August 27, 2019

Neuroscience and mental state issues in forensic assessment

David Freedman and Simona Zaami
International Journal of Law and Psychiatry
Available online 2 April 2019

Abstract

Neuroscience has already changed how the law understands an individual's cognitive processes, how those processes shape behavior, and how bio-psychosocial history and neurodevelopmental approaches provide information, which is critical to understanding mental states underlying behavior, including criminal behavior. In this paper, we briefly review the state of forensic assessment of mental conditions in the relative culpability of criminal defendants, focused primarily on the weaknesses of current approaches. We then turn to focus on neuroscience approaches and how they have the potential to improve assessment, but with significant risks and limitations.

From the Conclusion:

This approach is not a cure-all. Understanding and explaining specific behaviors is a difficult undertaking, and explaining the mental condition of the person engaged in those behaviors at the time the behaviors took place is even more difficult. Yet, the law requires some degree of reliability and rigorous, honest presentation of the strengths and weaknesses of the science being relied upon to form opinions. Despite the dramatic advances in understanding the neural bases of cognition and functioning, neuroscience does not yet reliably describe how those processes emerge in a specific environmental context (Poldrack et al., 2018), nor what an individual was thinking, feeling, experiencing, understanding, or intending at a particular moment in time (Freedman & Woods, 2018; Greely & Farahany, 2019).


Wednesday, June 26, 2019

The computational and neural substrates of moral strategies in social decision-making

Jeroen M. van Baar, Luke J. Chang & Alan G. Sanfey
Nature Communications, Volume 10, Article number: 1483 (2019)

Abstract

Individuals employ different moral principles to guide their social decision-making, thus expressing a specific ‘moral strategy’. Which computations characterize different moral strategies, and how might they be instantiated in the brain? Here, we tackle these questions in the context of decisions about reciprocity using a modified Trust Game. We show that different participants spontaneously and consistently employ different moral strategies. By mapping an integrative computational model of reciprocity decisions onto brain activity using inter-subject representational similarity analysis of fMRI data, we find markedly different neural substrates for the strategies of ‘guilt aversion’ and ‘inequity aversion’, even under conditions where the two strategies produce the same choices. We also identify a new strategy, ‘moral opportunism’, in which participants adaptively switch between guilt and inequity aversion, with a corresponding switch observed in their neural activation patterns. These findings provide a valuable view into understanding how different individuals may utilize different moral principles.

(cut)

From the Discussion

We also report a new strategy observed in participants, moral opportunism. This group did not consistently apply one moral rule to their decisions, but rather appeared to make a motivational trade-off depending on the particular trial structure. This opportunistic decision strategy entailed switching between the behavioral patterns of guilt aversion and inequity aversion, and allowed participants to maximize their financial payoff while still always following a moral rule. Although it could have been the case that these opportunists merely resembled GA and IA in terms of decision outcome, and not in the underlying psychological process, a confirmatory analysis showed that the moral opportunists did in fact switch between the neural representations of guilt and inequity aversion, and thus flexibly employed the respective psychological processes underlying these two, quite different, social preferences. This further supports our interpretation that the activity patterns directly reflect guilt aversion and inequity aversion computations, and not a theoretically peripheral “third factor” shared between GA or IA participants. Additionally, we found activity patterns specifically linked to moral opportunism in the superior parietal cortex and dACC, which are strongly associated with cognitive control and working memory.
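The two strategies the authors contrast can be sketched as simple utility functions over how much a trustee returns in a Trust Game. This is a curator's simplification: the parameter values, the discrete choice grid, and the stripped-down guilt-aversion and Fehr-Schmidt-style inequity forms are illustrative assumptions, not the paper's fitted model.

```python
def guilt_aversion_utility(keep, returned, partner_expectation, theta=1.5):
    # Guilt aversion: disutility from returning less than the partner expects.
    return keep - theta * max(partner_expectation - returned, 0)

def inequity_aversion_utility(keep, returned, beta=1.5):
    # Inequity aversion (advantageous side only): disutility from keeping
    # more than one returns.
    return keep - beta * max(keep - returned, 0)

def best_return(utility, pie=20, **kw):
    """Pick the return amount (0..pie) that maximizes a given moral utility."""
    return max(range(pie + 1), key=lambda r: utility(pie - r, r, **kw))

# With a 20-unit pie and the partner expecting 10 back, both strategies
# prescribe the same even split (10) via different computations --
# mirroring the paper's point that identical choices can mask distinct
# underlying moral strategies.
print(best_return(guilt_aversion_utility, partner_expectation=10))  # 10
print(best_return(inequity_aversion_utility))                       # 10
```

A "moral opportunist" in this toy setup would, on each trial, follow whichever of the two utilities yields the higher payoff while still satisfying some moral rule.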


Thursday, June 6, 2019

A socio-historical take on the meta-problem of consciousness

Hakwan Lau and Matthias Michel
PsyArXiv Preprints
Last Edited May 21, 2019

Abstract

Whether consciousness is hard to explain depends on the notion of explanation at play. Importantly, for an explanation to be successful, it is necessary to have a correct understanding of the relevant basic empirical facts (i.e. the explanans). We review socio-historical factors that account for why, as a field, the neuroscience of consciousness has not been particularly successful at getting the basic facts right. And yet, we tend to aim for explanations of an unrealistically and unnecessarily ambitious nature. This discrepancy between ambitious notions of explanations and the relatively poor quality of explanans may account for what Chalmers calls “the meta-problem”.


Saturday, May 18, 2019

The Neuroscience of Moral Judgment

Joanna Demaree-Cotton & Guy Kahane
Published in The Routledge Handbook of Moral Epistemology, eds. Karen Jones, Mark Timmons, and Aaron Zimmerman (Routledge, 2018).

Abstract:

This chapter examines the relevance of the cognitive science of morality to moral epistemology, with special focus on the issue of the reliability of moral judgments. It argues that the kind of empirical evidence of most importance to moral epistemology is at the psychological rather than neural level. The main theories and debates that have dominated the cognitive science of morality are reviewed with an eye to their epistemic significance.

1. Introduction

We routinely make moral judgments about the rightness of acts, the badness of outcomes, or people’s characters. When we form such judgments, our attention is usually fixed on the relevant situation, actual or hypothetical, not on our own minds. But our moral judgments are obviously the result of mental processes, and we often enough turn our attention to aspects of this process—to the role, for example, of our intuitions or emotions in shaping our moral views, or to the consistency of a judgment about a case with more general moral beliefs.

Philosophers have long reflected on the way our minds engage with moral questions—on the conceptual and epistemic links that hold between our moral intuitions, judgments, emotions, and motivations. This form of armchair moral psychology is still alive and well, but it’s increasingly hard to pursue it in complete isolation from the growing body of research in the cognitive science of morality (CSM). This research is not only uncovering the psychological structures that underlie moral judgment but, increasingly, also their neural underpinning—utilizing, in this connection, advances in functional neuroimaging, brain lesion studies, psychopharmacology, and even direct stimulation of the brain. Evidence from such research has been used not only to develop grand theories about moral psychology, but also to support ambitious normative arguments.

Saturday, May 11, 2019

Free Will, an Illusion? An Answer from a Pragmatic Sentimentalist Point of View

Maureen Sie
Appears in : Caruso, G. (ed.), June 2013, Exploring the Illusion of Free Will and Moral Responsibility, Rowman & Littlefield.

According to some people, diverse findings in the cognitive and neurosciences suggest that free will is an illusion: We experience ourselves as agents, but in fact our brains decide, initiate, and judge before ‘we’ do (Soon, Brass, Heinze and Haynes 2008; Libet and Gleason 1983). Others have replied that the distinction between ‘us’ and ‘our brains’ makes no sense (e.g., Dennett 2003), or that scientists misperceive the conceptual relations that hold between free will and responsibility (Roskies 2006). Many others regard the neuroscientific findings as irrelevant to their views on free will. They do not believe that deterministic processes are incompatible with free will to begin with and, hence, do not understand why deterministic processes in our brain would be (see Sie and Wouters 2008, 2010). The latter response should be understood against the background of the philosophical free will discussion. In philosophy, free will is traditionally approached as a metaphysical problem, one that needs to be dealt with in order to discuss the legitimacy of our practices of responsibility. The emergence of our moral practices is seen as a result of the assumption that we possess free will (or some capacity associated with it), and the main question discussed is whether that assumption is compatible with determinism. In this chapter we want to steer clear of this 'metaphysical' discussion.

The question we are interested in in this chapter is whether the above-mentioned scientific findings are relevant to our use of the concept of free will when that concept is approached from a different angle. We call this different angle the 'pragmatic sentimentalist' approach to free will (hereafter the PS-approach). This approach can be traced back to Peter F. Strawson’s influential essay “Freedom and Resentment” (Strawson 1962). Contrary to the metaphysical approach, the PS-approach does not understand free will as a concept that somehow precedes our moral practices. Rather, it is assumed that everyday talk of free will naturally arises in a practice that is characterized by certain reactive attitudes that we take towards one another. This is why it is called 'sentimentalist.' In this approach, the practical purposes of the concept of free will take center stage. This is why it is called 'pragmatist.'

A draft of the book chapter can be downloaded here.

Saturday, February 16, 2019

There’s No Such Thing as Free Will

Stephen Cave
The Atlantic
Originally published June 2016

Here is an excerpt:

What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. The number of court cases, for example, that use evidence from neuroscience has more than doubled in the past decade—mostly in the context of defendants arguing that their brain made them do it. And many people are absorbing this message in other contexts, too, at least judging by the number of books and articles purporting to explain “your brain on” everything from music to magic. Determinism, to one degree or another, is gaining popular currency. The skeptics are in ascendance.

This development raises uncomfortable—and increasingly nontheoretical—questions: If moral responsibility depends on faith in our own agency, then as belief in determinism spreads, will we become morally irresponsible? And if we increasingly see belief in free will as a delusion, what will happen to all those institutions that are based on it?

(cut)

Determinism not only undermines blame, Smilansky argues; it also undermines praise. Imagine I do risk my life by jumping into enemy territory to perform a daring mission. Afterward, people will say that I had no choice, that my feats were merely, in Smilansky’s phrase, “an unfolding of the given,” and therefore hardly praiseworthy. And just as undermining blame would remove an obstacle to acting wickedly, so undermining praise would remove an incentive to do good. Our heroes would seem less inspiring, he argues, our achievements less noteworthy, and soon we would sink into decadence and despondency.

The info is here.

Saturday, January 19, 2019

There Is No Such Thing as Conscious Thought

Steve Ayan
Scientific American
Originally posted December 20, 2018

Here is an excerpt:

What makes you think conscious thought is an illusion?

I believe that the whole idea of conscious thought is an error. I came to this conclusion by following out the implications of two of the main theories of consciousness. The first is what is called the Global Workspace Theory, which is associated with neuroscientists Stanislas Dehaene and Bernard Baars. Their theory states that to be considered conscious a mental state must be among the contents of working memory (the “user interface” of our minds) and thereby be available to other mental functions, such as decision-making and verbalization. Accordingly, conscious states are those that are “globally broadcast,” so to speak. The alternative view, proposed by Michael Graziano, David Rosenthal and others, holds that conscious mental states are simply those that you know of, that you are directly aware of in a way that doesn’t require you to interpret yourself. You do not have to read your own mind to know of them. Now, whichever view you adopt, it turns out that thoughts such as decisions and judgments should not be considered to be conscious. They are not accessible in working memory, nor are we directly aware of them. We merely have what I call “the illusion of immediacy”—the false impression that we know our thoughts directly.

The info is here.

Here is a link to Keith Frankish's chapter on the Illusion of Consciousness.

Thursday, January 17, 2019

Neuroethics Guiding Principles for the NIH BRAIN Initiative

Henry T. Greely, Christine Grady, Khara M. Ramos, Winston Chiong and others
Journal of Neuroscience 12 December 2018, 38 (50) 10586-10588
DOI: https://doi.org/10.1523/JNEUROSCI.2077-18.2018

Introduction

Neuroscience presents important neuroethical considerations. Human neuroscience demands focused application of the core research ethics guidelines set out in documents such as the Belmont Report. Various mechanisms, including institutional review boards (IRBs), privacy rules, and the Food and Drug Administration, regulate many aspects of neuroscience research, and many articles, books, workshops, and conferences address neuroethics (Farah, 2010). However, responsible neuroscience research requires continual dialogue among neuroscience researchers, ethicists, philosophers, lawyers, and other stakeholders to help assess its ethical, legal, and societal implications. The Neuroethics Working Group of the National Institutes of Health (NIH) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a group of experts providing neuroethics input to the NIH BRAIN Initiative Multi-Council Working Group, seeks to promote this dialogue by proposing the following Neuroethics Guiding Principles (Table 1).

Wednesday, August 1, 2018

Why our brains see the world as ‘us’ versus ‘them’

Leslie Henderson
The Conversation
Originally posted June 2018

Here is an excerpt:

As opposed to fear, distrust and anxiety, circuits of neurons in brain regions called the mesolimbic system are critical mediators of our sense of “reward.” These neurons control the release of the transmitter dopamine, which is associated with an enhanced sense of pleasure. The addictive nature of some drugs, as well as pathological gaming and gambling, are correlated with increased dopamine in mesolimbic circuits.

In addition to dopamine itself, neurochemicals such as oxytocin can significantly alter the sense of reward and pleasure, especially in relationship to social interactions, by modulating these mesolimbic circuits.

Methodological variations indicate further study is needed to fully understand the roles of these signaling pathways in people. That caveat acknowledged, there is much we can learn from the complex social interactions of other mammals.

The neural circuits that govern social behavior and reward arose early in vertebrate evolution and are present in birds, reptiles, bony fishes and amphibians, as well as mammals. So while there is not a lot of information on reward pathway activity in people during in-group versus out-group social situations, there are some tantalizing results from studies on other mammals.

The article is here.

Sunday, May 27, 2018

The Ethics of Neuroscience - A Different Lens

New technologies are allowing us to have control over the human brain like never before. As we push the possibilities we must ask ourselves, what is neuroscience today and how far is too far?

The world’s best neurosurgeons can now provide treatments for things that were previously untreatable, such as Parkinson’s and clinical depression. Many patients are cured, while others develop side effects such as erratic behaviour and changes in their personality.

Not only do we have greater understanding of clinical psychology, forensic psychology and criminal psychology, we also have more control. Professional athletes and gamers are now using this technology – some of it untested – to improve performance. However, with these amazing possibilities come great ethical concerns.

This manipulation of the brain has far-reaching effects, impacting the law, marketing, health industries and beyond. We need to investigate the capabilities of neuroscience and ask the ethical questions that will determine how far we can push the science of mind and behaviour.

Friday, May 25, 2018

What does it take to be a brain disorder?

Anneli Jefferson
Synthese (2018).
https://doi.org/10.1007/s11229-018-1784-x

Abstract

In this paper, I address the question whether mental disorders should be understood to be brain disorders and what conditions need to be met for a disorder to be rightly described as a brain disorder. I defend the view that mental disorders are autonomous and that a condition can be a mental disorder without at the same time being a brain disorder. I then show the consequences of this view. The most important of these is that brain differences underlying mental disorders derive their status as disordered from the fact that they realize mental dysfunction and are therefore non-autonomous or dependent on the level of the mental. I defend this view of brain disorders against the objection that only conditions whose pathological character can be identified independently of the mental level of description count as brain disorders. The understanding of brain disorders I propose requires a certain amount of conceptual revision and is at odds with approaches which take the notion of brain disorder to be fundamental or look to neuroscience to provide us with a purely physiological understanding of mental illness. It also entails a pluralistic understanding of psychiatric illness, according to which a condition can be both a mental disorder and a brain disorder.

The research is here.