Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Agency. Show all posts

Thursday, January 17, 2019

Neuroethics Guiding Principles for the NIH BRAIN Initiative

Henry T. Greely, Christine Grady, Khara M. Ramos, Winston Chiong and others
Journal of Neuroscience 12 December 2018, 38 (50) 10586-10588
DOI: https://doi.org/10.1523/JNEUROSCI.2077-18.2018

Introduction

Neuroscience presents important neuroethical considerations. Human neuroscience demands focused application of the core research ethics guidelines set out in documents such as the Belmont Report. Various mechanisms, including institutional review boards (IRBs), privacy rules, and the Food and Drug Administration, regulate many aspects of neuroscience research, and many articles, books, workshops, and conferences address neuroethics (Farah, 2010). However, responsible neuroscience research requires continual dialogue among neuroscience researchers, ethicists, philosophers, lawyers, and other stakeholders to help assess its ethical, legal, and societal implications. The Neuroethics Working Group of the National Institutes of Health (NIH) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a group of experts providing neuroethics input to the NIH BRAIN Initiative Multi-Council Working Group, seeks to promote this dialogue by proposing the following Neuroethics Guiding Principles (Table 1).

Tuesday, November 13, 2018

Delusions and Three Myths of Irrational Belief

Bortolotti, L. (2018). Delusions and Three Myths of Irrational Belief.
In: Bortolotti, L. (ed.), Delusions in Context. Palgrave Macmillan, Cham.

Abstract

This chapter addresses the contribution that the delusion literature has made to the philosophy of belief. Three conclusions will be drawn: (1) a belief does not need to be epistemically rational to be used in the interpretation of behaviour; (2) a belief does not need to be epistemically rational to have significant psychological or epistemic benefits; (3) beliefs exhibiting the features of epistemic irrationality exemplified by delusions are not infrequent, and they are not an exception in a largely rational belief system. What we learn from the delusion literature is that there are complex relationships between rationality and interpretation, rationality and success, and rationality and knowledge.

The chapter is here.

Here is a portion of the Conclusion:

Second, it is not obvious that epistemically irrational beliefs should be corrected, challenged, or regarded as a glitch in an otherwise rational belief system. The whole attitude towards such beliefs should change. We all have many epistemically irrational beliefs, and they are not always a sign that we lack credibility or are mentally unwell. Rather, they are predictable features of human cognition (Puddifoot and Bortolotti, 2018). We are not unbiased in the way we weigh up evidence, and we tend to be conservative once we have adopted a belief, making it hard for new contrary evidence to unsettle our existing convictions. Some delusions are just a vivid illustration of a general tendency that is widely shared and hard to counteract. Delusions, just like more common epistemically irrational beliefs, may be a significant obstacle to the achievement of our goals and may cause a rift between our way of seeing the world and other people’s way. That is why it is important to develop a critical attitude towards their content.

Sunday, November 4, 2018

When Tech Knows You Better Than You Know Yourself

Nicholas Thompson
www.wired.com
Originally published October 4, 2018

Here is an excerpt:

Hacking a Human

NT: Explain what it means to hack a human being and why what can be done now is different from what could be done 100 years ago.

YNH: To hack a human being is to understand what's happening inside you on the level of the body, of the brain, of the mind, so that you can predict what people will do. You can understand how they feel and you can, of course, once you understand and predict, you can usually also manipulate and control and even replace. And of course it can't be done perfectly and it was possible to do it to some extent also a century ago. But the difference in the level is significant. I would say that the real key is whether somebody can understand you better than you understand yourself. The algorithms that are trying to hack us, they will never be perfect. There is no such thing as understanding perfectly everything or predicting everything. You don't need perfect, you just need to be better than the average human being.

If you have an hour, please watch the video.

Thursday, November 1, 2018

Lesion network localization of free will

R. Ryan Darby, Juho Joutsa, Matthew J. Burke, and Michael D. Fox
PNAS
First published October 1, 2018

Abstract

Our perception of free will is composed of a desire to act (volition) and a sense of responsibility for our actions (agency). Brain damage can disrupt these processes, but which regions are most important for free will perception remains unclear. Here, we study focal brain lesions that disrupt volition, causing akinetic mutism (n = 28), or disrupt agency, causing alien limb syndrome (n = 50), to better localize these processes in the human brain. Lesion locations causing either syndrome were highly heterogeneous, occurring in a variety of different brain locations. We next used a recently validated technique termed lesion network mapping to determine whether these heterogeneous lesion locations localized to specific brain networks. Lesion locations causing akinetic mutism all fell within one network, defined by connectivity to the anterior cingulate cortex. Lesion locations causing alien limb fell within a separate network, defined by connectivity to the precuneus. Both findings were specific for these syndromes compared with brain lesions causing similar physical impairments but without disordered free will. Finally, our lesion-based localization matched network localization for brain stimulation locations that disrupt free will and neuroimaging abnormalities in patients with psychiatric disorders of free will without overt brain lesions. Collectively, our results demonstrate that lesions in different locations causing disordered volition and agency localize to unique brain networks, lending insight into the neuroanatomical substrate of free will perception.

The article is here.

How much control do you really have over your actions?

Michael Price
Sciencemag.org
Originally posted October 1, 2018

Here is an excerpt:

Philosophers have wrestled with questions of free will—that is, whether we are active drivers or passive observers of our decisions—for millennia. Neuroscientists tap-dance around it, asking instead why most of us feel like we have free will. They do this by looking at rare cases in which people seem to have lost it.

Patients with both alien limb syndrome and akinetic mutism have lesions in their brains, but there doesn’t seem to be a consistent pattern. So Darby and his colleagues turned to a relatively new technique known as lesion network mapping.

They combed the literature for brain imaging studies of both types of patients and mapped out all of their reported brain lesions. Then they plotted those lesions onto maps of brain regions that reliably activate together at the same time, better known as brain networks. Although the individual lesions in patients with the rare movement disorders appeared to occur without rhyme or reason, the team found, those seemingly arbitrary locations fell within distinct brain networks.

The researchers compared their results with those from people who lost some voluntary movement after receiving temporary brain stimulation, which uses low-voltage electrodes or targeted magnetic fields to temporarily “knock offline” brain regions.

The networks that caused loss of voluntary movement or agency in those studies matched Darby and colleagues’ new lesion networks. This suggests these networks are involved in voluntary movement and the perception that we’re in control of, and responsible for, our actions, the researchers report today in the Proceedings of the National Academy of Sciences.

The info is here.

Tuesday, April 10, 2018

Should We Root for Robot Rights?

Evan Selinger
Medium.com
Originally posted February 15, 2018

Here is an excerpt:

Maybe there’s a better way forward — one where machines aren’t kept firmly in their machine-only place, humans don’t get wiped out Skynet-style, and our humanity isn’t sacrificed by giving robots a better deal.

While the legal challenges ahead may seem daunting, they pose enticing puzzles for many thoughtful legal minds, who are even now diligently embracing the task. Annual conferences like We Robot — to pick but one example — bring together the best and the brightest to imagine and propose creative regulatory frameworks that would impose accountability in various contexts on designers, insurers, sellers, and owners of autonomous systems.

From the application of centuries-old concepts like “agency” to designing cutting-edge concepts for drones and robots on the battlefield, these folks are ready to explore the hard problems of machines acting with varying shades of autonomy. For the foreseeable future, these legal theories will include clear lines of legal responsibility for the humans in the loop, particularly those who abuse technology either intentionally or through carelessness.

The social impacts of our seemingly insatiable need to interact with our devices have been drawing accelerated attention for at least a decade. From the American Academy of Pediatrics creating recommendations for limiting screen time to updating etiquette and social mores for devices while dining, we are attacking these problems through both institutional and cultural channels.

The article is here.

Friday, December 22, 2017

Is Technology Value-Neutral? New Technologies and Collective Action Problems

John Danaher
Philosophical Disquisitions
Originally published December 3, 2017

Here is an excerpt:

Value-neutrality is a seductive position. For most of human history, technology has been the product of human agency. In order for a technology to come into existence, and have any effect on the world, it must have been conceived, created and utilised by a human being. There has been a necessary dyadic relationship between humans and technology. This has meant that whenever it comes time to evaluate the impacts of a particular technology on the world, there is always some human to share in the praise or blame. And since we are so comfortable with praising and blaming our fellow human beings, it’s very easy to suppose that they share all the praise and blame.

Note how I said that this has been true for ‘most of human history’. There is one obvious way in which technology could cease to be value-neutral: if technology itself has agency. In other words, if technology develops its own preferences and values, and acts to pursue them in the world. The great promise (and fear) about artificial intelligence is that it will result in forms of technology that do exactly that (and that can create other forms of technology that do exactly that). Once we have full-blown artificial agents, the value-neutrality thesis may no longer be so seductive.

We are almost there, but not quite. For the time being, it is still possible to view all technologies in terms of the dyadic relationship that makes value-neutrality more plausible.

The article is here.

Monday, August 7, 2017

Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci

Sven Nyholm
Science and Engineering Ethics
pp 1–19

Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human–robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.

The article is here.

Thursday, September 22, 2016

Does Situationism Threaten Free Will and Moral Responsibility?

Michael McKenna and Brandon Warmke
Journal of Moral Psychology

Abstract

The situationist movement in social psychology has caused a considerable stir in philosophy over the last fifteen years. Much of this was prompted by the work of the philosophers Gilbert Harman (1999) and John Doris (2002). Both contended that familiar philosophical assumptions about the role of character in the explanation of human action were not supported by the situationists’ experimental results. Most of the ensuing philosophical controversy has focused upon issues related to moral psychology and ethical theory, especially virtue ethics. More recently, the influence of situationism has also given rise to further questions regarding free will and moral responsibility (e.g., Brink 2013; Ciurria 2013; Doris 2002; Mele and Shepherd 2013; Miller 2016; Nelkin 2005; Talbert 2009; and Vargas 2013b). In this paper, we focus just upon these latter issues. Moreover, we focus primarily on reasons-responsive theories. There is cause for concern that a range of situationist findings are in tension with the sort of reasons-responsiveness putatively required for free will and moral responsibility. Here, we develop and defend a response to the alleged situationist threat to free will and moral responsibility that we call pessimistic realism. We conclude on an optimistic note, however, exploring the possibility of strengthening our agency in the face of situational influences.

The article is here.

Thursday, August 4, 2016

Undermining Belief in Free Will Diminishes True Self-Knowledge

Elizabeth Seto and Joshua A. Hicks
Disassociating the Agent From the Self
Social Psychological and Personality Science, first published June 17, 2016. DOI: 10.1177/1948550616653810

Undermining the belief in free will influences thoughts and behavior, yet little research has explored its implications for the self and identity. The current studies examined whether lowering free will beliefs reduces perceived true self-knowledge. First, a new free will manipulation was validated. Next, in Study 1, participants were randomly assigned to high belief or low belief in free will conditions and completed measures of true self-knowledge. In Study 2, participants completed the same free will manipulation and a moral decision-making task. We then assessed participants’ perceived sense of authenticity during the task. Results illustrated that attenuating free will beliefs led to less self-knowledge, such that participants reported feeling more alienated from their true selves and experienced lowered perceptions of authenticity while making moral decisions. The interplay between free will and the true self is discussed.

Friday, December 11, 2015

Why do we intuitively believe we have free will?

By Tom Stafford
BBC.com
Originally published 7 August 2015

It is perhaps the most famous experiment in neuroscience. In 1983, Benjamin Libet sparked controversy with his demonstration that our sense of free will may be an illusion, a controversy that has only increased ever since.

Libet’s experiment has three vital components: a choice, a measure of brain activity, and a clock.

The choice is to move either your left or right arm. In the original version of the experiment this is by flicking your wrist; in some versions of the experiment it is to raise your left or right finger. Libet’s participants were instructed to “let the urge [to move] appear on its own at any time without any pre-planning or concentration on when to act”. The precise time at which you move is recorded from the muscles of your arm.

The article is here.

Thursday, November 26, 2015

Inability and Obligation in Moral Judgment

Wesley Buckwalter and John Turri
PLOS
Published: August 21, 2015
DOI: 10.1371/journal.pone.0136589

Introduction

Morality is central to human social life [1–3]. Fulfilling moral obligations often requires us to put other people’s interests before our own. Sometimes this is easy, but other times it is hard. For example, it is plausible we are obligated to alleviate terrible suffering if we can do so at very little cost to ourselves, as happens when we donate money to famine relief or vaccination programs. But how far does this obligation extend? Some argue that it extends to the point where we would be making ourselves worse off than the people receiving charitable aid [4]. Many have found this suggestion implausible, sometimes on the grounds that the requirements for morality are limited by our psychology [5–7]. Given the way we are constituted, perhaps we are simply incapable of donating that much. This raises an important question: how demanding is morality and what are the limits of moral requirements?

According to a longstanding principle of moral philosophy, moral requirements are limited by ability. This is often glossed by the slogan that “ought implies can” (hereafter “OIC” for short). The principle says that one is obliged to perform an action only if one can perform the action. Support for OIC can be traced back to at least Cicero [8]. A more explicit articulation comes from Immanuel Kant, who writes, “Duty commands nothing but what we can do,” and that, “If the moral law commands that we ought to be better human beings now, it inescapably follows that we must be capable of being better human beings”.

The entire article is here.

Thursday, November 12, 2015

Neuroscientific Prediction and Free Will: What do ordinary people think?

By Gregg D. Caruso
Psychology Today Blog
Originally published October 26, 2015

Some theorists have argued that our knowledge of the brain will one day advance to the point where the perfect neuroscientific prediction of all human choices is theoretically possible. Whether or not such prediction ever becomes a reality, this possibility raises an interesting philosophical question: Would such perfect neuroscientific prediction be compatible with the existence of free will? Philosophers have long debated such questions. The historical debate between compatibilists and incompatibilists, for example, has centered on whether determinism and free will can be reconciled. Determinism is the thesis that every event or action, including human action, is the inevitable result of preceding events and actions and the laws of nature. The question of perfect neuro-prediction is just a more recent expression of this much older debate. While philosophers have their arguments for the compatibility or incompatibility of free will and determinism (or perfect neuroscientific prediction), they also often claim that their intuitions are in general agreement with commonsense judgments. To know whether this is true, however, we first need to know what ordinary folk think about these matters. Fortunately, recent research in psychology and experimental philosophy has begun to shed some light on this.

The entire article is here.

Monday, November 2, 2015

Does Disbelief in Free Will Increase Anti-Social Behavior?

By Gregg Caruso
Psychology Today Blog
Originally published October 16, 2015

Here is an excerpt:

Rather than defend free will skepticism, however, I would like to examine an important practical question: What if we came to disbelieve in free will and basic desert moral responsibility? What would this mean for our interpersonal relationships, society, morality, meaning, and the law? What would it do to our standing as human beings? Would it cause nihilism and despair as some maintain? Or perhaps increase anti-social behavior as some recent studies have suggested (more on this in a moment)? Or would it rather have a humanizing effect on our practices and policies, freeing us from the negative effects of free will belief? These questions are of profound pragmatic importance and should be of interest independent of the metaphysical debate over free will. As public proclamations of skepticism continue to rise, and as the media continues to run headlines proclaiming that free will is an illusion, we need to ask what effects this will have on the general public and what the responsibility is of professionals.

In recent years a small industry has actually grown up around precisely these questions. In the skeptical community, for example, a number of different positions have been developed and advanced—including Saul Smilansky’s illusionism, Thomas Nadelhoffer’s disillusionism, Shaun Nichols’ anti-revolution, and the optimistic skepticism of Derk Pereboom, Bruce Waller, and myself.

The entire article is here.

Monday, August 10, 2015

The dawn of artificial intelligence

Powerful computers will reshape humanity’s future. How to ensure the promise outweighs the perils

The Economist
Originally published May 9, 2015

“The development of full artificial intelligence could spell the end of the human race,” Stephen Hawking warns. Elon Musk fears that the development of artificial intelligence, or AI, may be the biggest existential threat humanity faces. Bill Gates urges people to beware of it.

Dread that the abominations people create will become their masters, or their executioners, is hardly new. But voiced by a renowned cosmologist, a Silicon Valley entrepreneur and the founder of Microsoft—hardly Luddites—and set against the vast investment in AI by big firms like Google and Microsoft, such fears have taken on new weight. With supercomputers in every pocket and robots looking down on every battlefield, just dismissing them as science fiction seems like self-deception. The question is how to worry wisely.

The entire article is here.

Tuesday, July 14, 2015

Consciousness has less control than believed

San Francisco State University
Press Release
Originally released June 23, 2015

Consciousness -- the internal dialogue that seems to govern one's thoughts and actions -- is far less powerful than people believe, serving as a passive conduit rather than an active force that exerts control, according to a new theory proposed by an SF State researcher.

Associate Professor of Psychology Ezequiel Morsella's "Passive Frame Theory" suggests that the conscious mind is like an interpreter helping speakers of different languages communicate.

"The interpreter presents the information but is not the one making any arguments or acting upon the knowledge that is shared," Morsella said. "Similarly, the information we perceive in our consciousness is not created by conscious processes, nor is it reacted to by conscious processes. Consciousness is the middle-man, and it doesn't do as much work as you think."

Morsella and his coauthors' groundbreaking theory, published online on June 22 by the journal Behavioral and Brain Sciences, contradicts intuitive beliefs about human consciousness and the notion of self.

The entire press release is here.

Friday, July 10, 2015

Against a singular understanding of legal capacity: Criminal responsibility and the Convention on the Rights of Persons with Disabilities

By Jillian Craigie
International Journal of Law and Psychiatry
Volume 40, May–June 2015, Pages 6–14

Abstract

The United Nations Convention on the Rights of Persons with Disabilities (CRPD) is being used to argue for wider recognition of the legal capacity of people with mental disabilities. This raises a question about the implications of the Convention for attributions of criminal responsibility. The present paper works towards an answer by analysing the relationship between legal capacity in relation to personal decisions and criminal acts. Its central argument is that because moral and political considerations play an essential role in setting the relevant standards, legal capacity in the context of personal decisions and criminal acts should not be thought of as two sides of the same coin. The implications of particular moral or political norms are likely to be different in these two legal contexts, and this may justify asymmetries in the relevant standards for legal capacity. However, the analysis highlights a fundamental question about how much weight moral or political considerations should be given in setting these standards, and this is used to frame a challenge to those calling for significantly wider recognition of the legal capacity of people with mental disabilities on the basis of the Convention.

The entire article is here.

Wednesday, July 8, 2015

How could they?

By Tage Rai
Aeon Magazine
Originally published June 18, 2015

Here is an excerpt:

It would be easier to live in a world where perpetrators believe that violence is wrong and engage in it anyway. That is not the world we live in. While our refusal to acknowledge this basic fact may have helped to orient our own moral compass, it has also stood in the way of interventions that might actually reduce harm. Let’s put aside the philosophical questions that arise once we accept that there is moral disagreement about violence. How does the message that violence is morally motivated aid our efforts to reduce it?

For years, we have been trying to reduce crime by enacting mass incarceration, by placing restrictions on the mentally ill, and by teaching potential perpetrators how to exercise more self-control. On the face of it, these all sound like plausible strategies. But all of them miss their target.

One of the most robust findings in criminology is that increasing the severity of punishment has little deterrent effect. People simply aren’t as sensitive to the potential costs of crime as the rational-choice model predicts they should be, and so efforts to reduce it by cracking down have failed to justify the immense fiscal and social costs of mass incarceration. Meanwhile, because most violent crimes are committed by psychologically healthy individuals, legislation that focuses on the mentally ill – for example, by stopping them from buying guns – would lead to only a small reduction.

The entire article is here.

Tuesday, July 7, 2015

Free Will Skepticism and Its Implications: An Argument for Optimism

By Gregg Caruso
For Free Will Skepticism in Law and Society, ed. Elizabeth Shaw & Derk Pereboom

Here is an excerpt:

What, then, would be the consequence of accepting free will skepticism? What if we came to disbelieve in free will and moral responsibility? What would this mean for our interpersonal relationships, society, morality, meaning, and the law? What would it do to our standing as human beings? Would it cause nihilism and despair as some maintain? Or perhaps increase anti-social behavior as some recent studies have suggested (Vohs and Schooler 2008; Baumeister, Masicampo, and DeWall 2009)? Or would it rather have a humanizing effect on our practices and policies, freeing us from the negative effects of free will belief? These questions are of profound pragmatic importance and should be of interest independent of the metaphysical debate over free will. As public proclamations of skepticism continue to rise, and as the mass media continues to run headlines announcing "Free will is an illusion" and "Scientists say free will probably doesn't exist," we need to ask what effects this will have on the general public and what the responsibility is of professionals.