Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Neuroethics.

Wednesday, February 17, 2021

Distributed Cognition and Distributed Morality: Agency, Artifacts and Systems

Heersmink, R. 
Sci Eng Ethics 23, 431–448 (2017). 
https://doi.org/10.1007/s11948-016-9802-1

Abstract

There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. I specifically conceptualise how such artifacts (a) scaffold and extend moral reasoning and decision-making processes, (b) have a certain moral status which is contingent on their cognitive status, and (c) raise the question of whether responsibility can be attributed to the distributed systems they are part of. This paper is primarily written for those interested in the intersection of cognitive and moral theory as it relates to artifacts, but also for those independently interested in philosophical debates in extended and distributed cognition and the ethics of (cognitive) technology.

Discussion

Both Floridi and Verbeek argue that moral actions, whether positive or negative, can be the result of interactions between humans and technology, giving artifacts a much more prominent role in ethical theory than most philosophers do. Both develop a non-anthropocentric systems approach to morality. Floridi focuses on large-scale ‘‘multiagent systems’’, whereas Verbeek focuses on small-scale ‘‘human–technology associations’’. But both attribute morality or moral agency to systems comprising humans and technological artifacts. On their views, moral agency is thus a system property and not found exclusively in human agents. Does this mean that the artifacts and software programs involved in the process have moral agency? Neither of them attributes moral agency to the artifactual components of the larger system. It is not inconsistent to say that the human–artifact system has moral agency without saying that its artifactual components have moral agency: systems often have different properties than their components.

The difference between Floridi’s and Verbeek’s approaches roughly mirrors the difference between distributed and extended cognition, in that Floridi and distributed cognition theory focus on large-scale systems without central controllers, whereas Verbeek and extended cognition theory focus on small-scale systems in which agents interact with and control an informational artifact. In Floridi’s example, the technology seems semi-autonomous: the software and computer systems automatically do what they are designed to do. Presumably, the money is automatically transferred to Oxfam, implying that the technology is a mere cog in a larger socio-technical system that realises positive moral outcomes. There seems to be no central controller in this system: it is therefore difficult to see it as an extended agency whose intentions are being realised.

Thursday, January 17, 2019

Neuroethics Guiding Principles for the NIH BRAIN Initiative

Henry T. Greely, Christine Grady, Khara M. Ramos, Winston Chiong and others
Journal of Neuroscience 12 December 2018, 38 (50) 10586-10588
DOI: https://doi.org/10.1523/JNEUROSCI.2077-18.2018

Introduction

Neuroscience presents important neuroethical considerations. Human neuroscience demands focused application of the core research ethics guidelines set out in documents such as the Belmont Report. Various mechanisms, including institutional review boards (IRBs), privacy rules, and the Food and Drug Administration, regulate many aspects of neuroscience research, and many articles, books, workshops, and conferences address neuroethics (Farah, 2010). However, responsible neuroscience research requires continual dialogue among neuroscience researchers, ethicists, philosophers, lawyers, and other stakeholders to help assess its ethical, legal, and societal implications. The Neuroethics Working Group of the National Institutes of Health (NIH) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a group of experts providing neuroethics input to the NIH BRAIN Initiative Multi-Council Working Group, seeks to promote this dialogue by proposing the following Neuroethics Guiding Principles (Table 1).

Wednesday, July 11, 2018

Could Moral Enhancement Interventions be Medically Indicated?

Sarah Carter
Health Care Analysis
December 2017, Volume 25, Issue 4, pp 338–353

Abstract

This paper explores the position that moral enhancement interventions could be medically indicated (and so considered therapeutic) in cases where they provide a remedy for a lack of empathy, when such a deficit is considered pathological. To defend this claim, the question of whether a deficit of empathy could be considered pathological is examined, taking into account the difficulty of defining illness and disorder generally, and especially in the case of mental health. Following this, psychopathy and a fictionalised mental disorder (Moral Deficiency Disorder) are explored with a view to considering moral enhancement techniques as possible treatments for both conditions. At this juncture, having asserted and defended the position that moral enhancement interventions could, under certain circumstances, be considered medically indicated, the paper goes on to briefly explore some of the consequences of this assertion. First, it is acknowledged that this broadening of diagnostic criteria in light of new interventions could fall foul of claims of medicalisation. It is then briefly noted that considering moral enhancement technologies to be akin to therapies in certain circumstances could lead to ethical and legal consequences and questions, such as those regarding regulation, access, and even consent.

The paper is here.

Thursday, January 25, 2018

Neurotechnology, Elon Musk and the goal of human enhancement

Sarah Marsh
The Guardian
Originally published January 1, 2018

Here is an excerpt:

“I hope more resources will be put into supporting this very promising area of research. Brain Computer Interfaces (BCIs) are not only an invaluable tool for people with disabilities, but they could be a fundamental tool for going beyond human limits, hence improving everyone’s life.”

He notes, however, that one of the biggest challenges with this technology is that first we need to better understand how the human brain works before deciding where and how to apply BCI. “This is why many agencies have been investing in basic neuroscience research – for example, the Brain initiative in the US and the Human Brain Project in the EU.”

Whenever there is talk of enhancing humans, moral questions remain – particularly around where the human ends and the machine begins. “In my opinion, one way to overcome these ethical concerns is to let humans decide whether they want to use a BCI to augment their capabilities,” Valeriani says.

“Neuroethicists are working to give advice to policymakers about what should be regulated. I am quite confident that, in the future, we will be more open to the possibility of using BCIs if such systems provide a clear and tangible advantage to our lives.”

The article is here.

Wednesday, March 29, 2017

Neuroethics and the Ethical Parity Principle

DeMarco, J.P. & Ford, P.J.
Neuroethics (2014) 7: 317.
doi:10.1007/s12152-014-9211-6

Abstract

Neil Levy offers the most prominent moral principles that are specifically and exclusively designed to apply to neuroethics. His two closely related principles, labeled as versions of the ethical parity principle (EPP), are intended to resolve moral concerns about neurological modification and enhancement [1]. Though EPP is appealing and potentially illuminating, we reject the first version and substantially modify the second. Since his first principle, called EPP (strong), is dependent on the contention that the mind literally extends into external props such as paper notebooks and electronic devices, we begin with an examination of the extended mind hypothesis (EMH) and its use in Levy’s EPP (strong). We argue against reliance on EMH as support for EPP (strong). We turn to his second principle, EPP (weak), which is not dependent on EMH but is tied to the acceptable claim that the mind is embedded in, because dependent on, external props. As a result of our critique of EPP (weak), we develop a modified version of EPP (weak), which we argue is more acceptable than Levy’s principle. Finally, we evaluate the applicability of our version of EPP (weak).

The article is here.

Saturday, February 27, 2016

Neuroethics

Roskies, Adina, "Neuroethics", The Stanford Encyclopedia of Philosophy (Spring 2016 Edition), Edward N. Zalta (ed.), forthcoming

Neuroethics is an interdisciplinary research area that focuses on ethical issues raised by our increased and constantly improving understanding of the brain and our ability to monitor and influence it, as well as on ethical issues that emerge from our concomitant deepening understanding of the biological bases of agency and ethical decision-making.

1. The rise and scope of neuroethics

Neuroethics focuses on ethical issues raised by our continually improving understanding of the brain, and by consequent improvements in our ability to monitor and influence brain function. Significant attention to neuroethics can be traced to 2002, when the Dana Foundation organized a meeting of neuroscientists, ethicists, and other thinkers, entitled Neuroethics: Mapping the Field. A participant at that meeting, columnist and wordsmith William Safire, is often credited with introducing and establishing the meaning of the term “neuroethics”, defining it as
the examination of what is right and wrong, good and bad about the treatment of, perfection of, or unwelcome invasion of and worrisome manipulation of the human brain. (Marcus 2002: 5)
Others contend that the word “neuroethics” was in use prior to this (Illes 2003; Racine 2010), although all agree that these earlier uses did not employ it in a disciplinary sense, or to refer to the entirety of the ethical issues raised by neuroscience.

The entire entry is here.

Tuesday, January 5, 2016

Neuroethics

Richard Marshall interviews Kathinka Evers
3:AM Magazine
Originally published December 20, 2015

Here is an excerpt:

So far, researchers in neuroethics have focused mainly on the ethics of neuroscience, or applied neuroethics, such as the ethical issues involved in neuroimaging techniques, cognitive enhancement, or neuropharmacology. Another important, though as yet less prevalent, scientific approach that I refer to as fundamental neuroethics questions how knowledge of the brain’s functional architecture and its evolution can deepen our understanding of personal identity, consciousness and intentionality, including the development of moral thought and judgment. Fundamental neuroethics should provide the theoretical foundations required to address problems of application properly.

The initial question for fundamental neuroethics to answer is: how can natural science deepen our understanding of moral thought? Indeed, is the former relevant to the latter at all? One can see this as a sub-question of whether human consciousness can be understood in biological terms, moral thought being a subset of thought in general. That is certainly not a new query, but a version of the classical mind–body problem that has been discussed for millennia, and in quite modern terms from the French Enlightenment onwards. What is comparatively new is the realisation of the extent to which ancient philosophical problems emerge in the rapidly advancing neurosciences: whether or not the human species possesses free will, what it means to have personal responsibility or to be a self, and the relations between emotions and cognition, or between emotions and memory.

The interview is here.

Friday, February 20, 2015

Cognitive enhancement kept within contexts: neuroethics and informed public policy

By John R. Shook, Lucia Galvagni, and James Giordano
Front Syst Neurosci. 2014; 8: 228.
Published online Dec 5, 2014. doi:  10.3389/fnsys.2014.00228

Abstract

Neuroethics has far greater responsibilities than merely noting potential human enhancements arriving from novel brain-centered biotechnologies and tracking their implications for ethics and civic life. Neuroethics must utilize the best cognitive and neuroscientific knowledge to shape incisive discussions about what could possibly count as enhancement in the first place, and what should count as genuinely “cognitive” enhancement. Where cognitive processing and mental life are concerned, the lived context of psychological performance is paramount. Starting with an enhancement to the mental abilities of an individual, only performances on real-world exercises can determine what has actually been cognitively improved. And what concretely counts as some specific sort of cognitive improvement is largely determined by the classificatory frameworks of cultures, not brain scans or laboratory experiments. Additionally, where the public must ultimately evaluate and judge the worthiness of individual performance enhancements, we mustn’t presume that public approval of enhancers will somehow automatically arrive without due regard to civic ideals such as the common good or social justice. In the absence of any nuanced appreciation of the control that performance contexts and public contexts exert over what “cognitive” enhancements could actually be, enthusiastic promoters of cognitive enhancement can all too easily depict safe and effective brain modifications as surely good for us and for society. These enthusiasts are not unaware of oft-heard observations about serious hurdles to reliable enhancement from neurophysiological modifications. Yet those observations are far more common than penetrating investigations into the implications of those hurdles for a sound public understanding of cognitive enhancement and a wise policy review of cognitive enhancement. We offer some crucial recommendations for undertaking such investigations, so that cognitive enhancers that truly deserve public approval can be better identified.

The entire article is here.