Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Agency.

Tuesday, April 16, 2024

As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits

Eric Lipton
The New York Times
Originally posted November 21, 2023

Here is an excerpt:

Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.

The intense jamming of radio communications and GPS in Ukraine has only accelerated the shift, as autonomous drones can often keep operating even when communications are cut off.

“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.

Pentagon officials have made it clear that they are preparing to deploy autonomous weapons in a big way.

Deputy Defense Secretary Kathleen Hicks announced this summer that the U.S. military would “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, saying that the push to compete with China’s own investment in advanced weapons necessitated that the United States “leverage platforms that are small, smart, cheap and many.”

The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.

What is changing is the introduction of artificial intelligence that could give weapons systems the capability to make decisions themselves after taking in and processing information.


Here is a summary:

This article discusses the debate at the UN regarding Lethal Autonomous Weapons (LAWs) – essentially AI-equipped autonomous drones that can select and attack targets without human intervention. There are concerns that this technology could lead to unintended casualties, make wars more likely, and remove the human element from the decision to take a life.
  • Many countries are worried about the development and deployment of LAWs.
  • Austria and other countries are proposing a total ban on LAWs or at least strict regulations requiring human control and limitations on how they can be used.
  • The US, Russia, and China are opposed to a ban and argue that LAWs could potentially reduce civilian casualties in wars.
  • The US prefers non-binding guidelines over new international laws.
  • The UN is currently deadlocked on the issue with no clear path forward for creating regulations.

Friday, December 1, 2023

To Lead a Meaningful Life, Become Your Own Hero

B. Rogers, K. Gray, & M. Christian
Scientific American
Originally published October 30, 2023

Here is an excerpt:

With our condensed version of the hero’s journey, we looked at the connection between how people told their life story and their feelings of meaning in life. Across four separate studies, we collected life stories from more than 1,200 people, including online participants and a group of middle-aged adults in Chicago. We also used questionnaires to measure the storytelling participants’ sense of meaning in life, amount of life satisfaction and level of depression.

We then examined these stories for the seven elements of the hero’s journey. We found that people who had more hero’s journey elements in their life stories reported more meaning in life, more flourishing and less depression. These “heroic” people (men and women were equally likely to see their life as a hero’s journey) reported a clearer sense of themselves than other participants did and more new adventures, strong goals, good friends, and so on.

We also found that hero’s journey narratives provided more benefits than other ones, including a basic “redemptive” narrative, where a person’s life story goes from defeat to triumph. Of course, redemption is often a part of the “transformation” part of the hero’s journey, but compared with people whose life story contained only the redemptive narrative, those with a full hero’s journey reported more meaning in life.

We then wondered whether altering one’s life story to be more “heroic” would increase feelings of meaning in life. We developed a “restorying” intervention in which we prompted people to retell their story as a hero’s journey. Participants first identified each of the seven elements in their life, and then we encouraged them to weave these pieces together into a coherent narrative.

In six studies with more than 1,700 participants, we confirmed that this restorying intervention worked: it helped people see their life as a hero’s journey, which in turn made that life feel more meaningful. Intervention recipients also reported higher well-being and became more resilient in the face of personal challenges; these participants saw obstacles more positively and dealt with them more creatively.


Here is a take for clinicians:

Here are some specific ways that therapists can use the hero's journey framework in psychotherapy:
  • Help clients to identify their values and goals. This can be done through a variety of exercises, such as writing exercises, role-playing, and journaling.
  • Help clients to develop a plan to achieve their goals. This may involve setting realistic goals, developing a timeline, and identifying resources and support systems.
  • Help clients to identify and overcome the challenges that are holding them back. This may involve addressing negative beliefs, developing coping skills, and processing past traumas.
  • Help clients to explore their purpose and find ways to live a life that is true to themselves. This may involve exploring their interests, values, and strengths.
The hero's journey is a powerful framework that can be used to help people find meaning and purpose in their lives. By framing their lives as hero's journeys, people can develop a greater sense of agency and control over their lives. They can also become more resilient in the face of challenges and setbacks.

Saturday, October 28, 2023

Meaning from movement and stillness: Signatures of coordination dynamics reveal infant agency

Sloan, A. T., Jones, N. A., et al. (2023).
PNAS, 120 (39) e2306732120

Abstract

How do human beings make sense of their relation to the world and realize their ability to effect change? Applying modern concepts and methods of coordination dynamics, we demonstrate that patterns of movement and coordination in 3- to 4-mo-olds may be used to identify states and behavioral phenotypes of emergent agency. By means of a complete coordinative analysis of baby and mobile motion and their interaction, we show that the emergence of agency can take the form of a punctuated self-organizing process, with meaning found both in movement and stillness.

Significance

Revamping one of the earliest paradigms for the investigation of infant learning, and moving beyond reinforcement accounts, we show that the emergence of agency in infants can take the form of a bifurcation or phase transition in a dynamical system that spans the baby, the brain, and the environment. Individual infants navigate functional coupling with the world in different ways, suggesting that behavioral phenotypes of agentive discovery exist—and dynamics provides a means to identify them. This phenotyping method may be useful for identifying babies at risk.

Here is my take:

Importantly, researchers found that the emergence of agency can take the form of a punctuated self-organizing process, with meaning found both in movement and stillness.

The findings of this study suggest that infants are not simply passive observers of the world around them, but rather active participants in their own learning and development. The researchers believe that their work could have implications for the early identification of infants at risk for developmental delays.

Here are some of the key takeaways from the study:
  • Infants learn to make sense of their relation to the world through their movement and interaction with their environment.
  • The emergence of agency is a punctuated, self-organizing process that occurs in both movement and stillness.
  • Individual infants navigate functional coupling with the world in different ways, suggesting that behavioral phenotypes of agentive discovery exist.
  • Dynamics provides a means to identify behavioral phenotypes of agentive discovery, which may be useful for identifying babies at risk.

This study is a significant contribution to our understanding of how infants learn and develop. It provides new insights into the role of movement and stillness in the emergence of agency and consciousness. The findings of this study have the potential to improve our ability to identify and support infants at risk for developmental delays.
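
To make the dynamical-systems framing concrete: a punctuated transition is an abrupt, sustained jump in a measured quantity rather than a gradual drift. The toy Python sketch below (my own illustration, not the authors' analysis pipeline; all numbers are invented) simulates an infant's movement rate jumping at the moment of "discovery" and then recovers that transition point from the data:

```python
# Toy illustration only -- not the authors' analysis, and all numbers
# are invented. A "punctuated" emergence of agency shows up as an
# abrupt, sustained jump in movement rate rather than a gradual drift.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical movement-rate series (movements/min): quiet baseline,
# then an abrupt increase once the infant "discovers" that its kicks
# drive the mobile.
baseline = rng.normal(10, 2, size=120)   # before discovery
engaged = rng.normal(25, 3, size=120)    # after discovery
rate = np.concatenate([baseline, engaged])

def change_point(x):
    """Return the split index maximizing the between-segment mean gap
    (a crude detector for a single abrupt transition)."""
    best_t, best_gap = 1, 0.0
    for t in range(10, len(x) - 10):     # skip tiny edge segments
        gap = abs(x[:t].mean() - x[t:].mean())
        if gap > best_gap:
            best_t, best_gap = t, gap
    return best_t

print(f"Detected transition at sample {change_point(rate)} (true: 120)")
```

The detector recovers the built-in jump. The paper's actual analyses are far richer, coupling infant and mobile motion, but the punctuated, bifurcation-like character of the change is the key qualitative point.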

Wednesday, October 18, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., & Sifferd, K. 
Ethic Theory Moral Prac 26, 361–375 (2023).

Abstract

Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.


Here is my take:

Responsible agency is the ability to act on the right moral reasons, even when it is difficult or costly. A moral audience is the group of people whose moral opinions we care about and respect.

According to the authors, moral audience plays a crucial role in responsible agency in two ways:
  1. It helps us to identify and internalize the right moral reasons. We learn about morality from our moral audience, and we are more likely to act on moral reasons if we know that our audience would approve of our actions.
  2. It provides us with motivation to act on moral reasons. We are more likely to do the right thing if we know that our moral audience will be disappointed in us if we don't.
The authors argue that moral audience is particularly important for responsible agency in novel contexts, where we may not have clear guidance from existing moral rules or norms. In these situations, we need to rely on our moral audience to help us to identify and act on the right moral reasons.

The authors also discuss some of the challenges that can arise when we are trying to identify and act on the right moral reasons. For example, our moral audience may have different moral views than we do, or they may be biased in some way. In these cases, we need to be able to critically evaluate our moral audience's views and make our own judgments about what is right and wrong.

Overall, the article makes a strong case for the importance of moral audience in developing and maintaining responsible agency. It is important to have a group of people whose moral opinions we care about and respect, and to be open to their feedback. This can help us to become more morally responsible agents.

Saturday, July 9, 2022

Techno-Optimism: An Analysis, an Evaluation and a Modest Defence

Danaher, J. 
Philos. Technol. 35, 54 (2022). https://doi.org/10.1007/s13347-022-00550-2

Abstract

What is techno-optimism and how can it be defended? Although techno-optimist views are widely espoused and critiqued, there have been few attempts to systematically analyse what it means to be a techno-optimist and how one might defend this view. This paper attempts to address this oversight by providing a comprehensive analysis and evaluation of techno-optimism. It is argued that techno-optimism is a pluralistic stance that comes in weak and strong forms. These vary along a number of key dimensions but each shares the view that technology plays a key role in ensuring that the good prevails over the bad. Whatever its strength, to defend this stance, one must flesh out an argument with four key premises. Each of these premises is highly controversial and can be subjected to a number of critiques. The paper discusses five such critiques in detail (the values critique, the treadmill critique, the sustainability critique, the irrationality critique and the insufficiency critique). The paper also considers possible responses from the techno-optimist. Finally, it is concluded that although strong forms of techno-optimism are not intellectually defensible, a modest, agency-based version of techno-optimism may be defensible.

Here is an excerpt:

To be more precise, a modest, agency-based view of techno-optimism entails the following four claims. First, it is epistemically rational to believe that it is at least possible (perhaps probable) that technology plays a key role in ensuring that the good prevails over the bad. Second, whether this possibility materialises depends to some meaningful extent on the power of collective human agency. If we select the right goals, make the concerted effort, and build the necessary institutions, there is a chance that the possibility materialises. Third, by believing that we can, collectively, achieve this, we increase the likelihood of this possibility materialising because we make it more likely that we will act in ways that ensure the desired outcomes (this is the adaptation of Bortolotti’s agency-based optimism to the case for techno-optimism). Fourth, it follows from this that we should cultivate the belief that we can achieve this and act upon that belief. In other words, our optimism should not simply be an inert belief but, rather, a belief that actually motivates our collective human agency.

If the agency-based view is incorporated into it, techno-optimism can then be an intellectually defensible view. It need not be an irrational faith in the inexorable march of technology but, rather, a realistic stance grounded in the transformational power of collective human agency to forge the right social institutions and to translate the right ideas into material technologies.

Tuesday, March 15, 2022

The Moral Consideration of Artificial Entities: A Literature Review

Harris, J., Anthis, J.R. 
Sci Eng Ethics 27, 53 (2021). 
https://doi.org/10.1007/s11948-021-00331-8

Abstract

Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on artificial entities and concern for the effects on human society. Beyond the conventional consequentialist, deontological, and virtue ethicist ethical frameworks, some scholars encourage “information ethics” and “social-relational” approaches, though there are opportunities for more in-depth ethical research on the nuances of moral consideration of artificial entities. There is limited relevant empirical data collection, primarily in a few psychological studies on current moral and social attitudes of humans towards robots and other artificial entities. This suggests an important gap for psychological, sociological, economic, and organizational research on how artificial entities will be integrated into society and the factors that will determine how the interests of artificial entities are considered.

Concluding Remarks

Many scholars lament that the moral consideration of artificial entities is discussed infrequently and not viewed as a proper object of academic inquiry. This literature review suggests that these perceptions are no longer entirely accurate. The number of publications is growing exponentially, and most scholars view artificial entities as potentially warranting moral consideration. Still, there are important gaps remaining, suggesting promising opportunities for further research, and the field remains small overall with only 294 items identified in this review.

These discussions have taken place largely separately from each other: legal rights, moral consideration, empirical research on human attitudes, and theoretical exploration of the risks of astronomical suffering among future artificial entities. Further contributions should seek to better integrate these discussions. The analytical frameworks used in one topic may offer valuable contributions to another. For example, what do legal precedent and empirical psychological research suggest are the most likely outcomes for future artificial sentience (as an example of studying likely technological outcomes, see Reese and Mohorčich, 2019)? What do virtue ethics and rights theories suggest is desirable in these plausible future scenarios?

Despite interest in the topic from policy-makers and the public, there is a notable lack of empirical data about attitudes towards the moral consideration of artificial entities. This leaves scope for surveys and focus groups on a far wider range of predictors of attitudes, experiments that test the effect of various messages and content on these attitudes, and qualitative and computational text analysis of news articles, opinion pieces, and science fiction books and films that touch on these topics. There are also many theoretically interesting questions to be asked about how these attitudes relate to other facets of human society, such as human in-group-out-group and human-animal interactions.

Wednesday, December 15, 2021

Voice-hearing across the continuum: a phenomenology of spiritual voices

Moseley, P., et al. (2021, November 16).
https://doi.org/10.31234/osf.io/7z2at

Abstract

Voice-hearing in clinical and non-clinical groups has previously been compared using standardized assessments of psychotic experiences. Findings from several studies suggest that non-clinical voice-hearing (NCVH) is distinguished by reduced distress and increased control. However, symptom-rating scales developed for clinical populations may be limited in their ability to elucidate subtle and unique aspects of non-clinical voices. Moreover, such experiences often occur within specific contexts and systems of belief, such as spiritualism. This makes direct comparisons difficult to interpret. Here we present findings from a comparative interdisciplinary study which administered a semi-structured interview to NCVH individuals and psychosis patients. The non-clinical group were specifically recruited from spiritualist communities. The findings were consistent with previous results regarding distress and control, but also documented multiple modalities that were often integrated into a single entity, high levels of associated visual imagery, and subtle differences in the location of voices relating to perceptual boundaries. Most spiritual voice-hearers reported voices before encountering spiritualism, suggesting that their onset was not solely due to deliberate practice. Future research should aim to understand how spiritual voice-hearers cultivate and control voice-hearing after its onset, which may inform interventions for people with distressing voices.

From the Discussion

As has been reported in previous studies, the ability to exhibit control over or influence voices seems to be an important difference between experiences reported by clinical and non-clinical groups. A key distinction here is between volitional control (the ability to bring on or stop voices intentionally) and the ability to influence voices (through other strategies such as engagement with or distraction from voices), referred to elsewhere as direct and indirect control. In the present study, the spiritual group reported substantially higher levels of control and influence over voices compared to patients. Importantly, nearly three-quarters of the group reported a change in their ability to influence the voices over time – compared to 12.5% of psychosis patients – suggesting that this ability is not always present from the onset of voice-hearing in non-clinical populations and instead can be actively developed. Indeed, our analysis indicated that 88.5% of the spiritual group described their voices starting spontaneously, with 69.2% reporting that this was before they had contact with spiritualism itself. Thus, while most of the group (96.2%) reported ongoing cultivation of the voices, and often reported developing influence over time, it seems that spiritual practices mostly do not elicit the actual initial onset of the voices, instead playing a role in honing the experience.

Monday, August 23, 2021

Deconstructing Moral Character Judgments

Hartman, R., Blakey, W., & Gray, K.
Current Opinion in Psychology

Abstract

People often make judgments of others' moral character – an inferred moral essence that presumably predicts moral behavior. We first define moral character and explore why people make character judgments before outlining three key elements that drive character judgments: behavior (good vs. bad, norm violations, and deliberation), mind (intentions, explanations, capacities), and identity (appearance, social groups, and warmth). We also provide a taxonomy of moral character that goes beyond simply good vs. evil. Drawing from the Theory of Dyadic Morality, we outline a two-dimensional triangular space of character judgments (valence and strength/agency), with three key corners: heroes, villains, and victims. Varieties of perceived moral character include saints and demons, strivers/sinners and opportunists, the non-moral, virtuous and culpable victims, and pure victims.

Conclusion 

It seems obvious that people make summary judgments of others’ moral character, but less obvious is how exactly they make those judgments. We suggest that people rely upon behavior, identity, and perceived mind when inferring the moral essence of others. We acknowledge that this list is certainly incomplete and will be expanded with future research. One key area of expansion explored here is the importance of perceived strength/agency in character judgments, which helps provide a taxonomy of character types. Whatever the exact varieties and drivers of moral character judgments, these judgments are clearly an important foundation of social life.

Monday, July 12, 2021

Workplace automation without achievement gaps: a reply to Danaher and Nyholm

Tigard, D.W. 
AI Ethics (2021). 
https://doi.org/10.1007/s43681-021-00064-1

Abstract

In a recent article in this journal, John Danaher and Sven Nyholm raise well-founded concerns that the advances in AI-based automation will threaten the values of meaningful work. In particular, they present a strong case for thinking that automation will undermine our achievements, thereby rendering our work less meaningful. It is also claimed that the threat to achievements in the workplace will open up ‘achievement gaps’—the flipside of the ‘responsibility gaps’ now commonly discussed in technology ethics. This claim, however, is far less worrisome than the general concerns for widespread automation, namely because it rests on several conceptual ambiguities. With this paper, I argue that although the threat to achievements in the workplace is problematic and calls for policy responses of the sort Danaher and Nyholm outline, when framed in terms of responsibility, there are no ‘achievement gaps’.

From the Conclusion

In closing, it is worth stopping to ask: Who exactly is the primary subject of “harm” (broadly speaking) in the supposed gap scenarios? Typically, in cases of responsibility gaps, the harm is seen as falling upon the person inclined to respond (usually with blame) and finding no one to respond to. This is often because they seek apologies or some sort of remuneration, and as we can imagine, it sets back their interests when such demands remain unfulfilled. But what about cases of achievement gaps? If we want to draw truly close analogies between the two scenarios, we would consider the subject of harm to be the person inclined to respond with praise and finding no one to praise. And perhaps there is some degree of disappointment here, but it hardly seems to be a worrisome kind of experience for that person. With this in mind, we might say there is yet another mismatch between responsibility gaps and achievement gaps. Nevertheless, on the account of Danaher and Nyholm, the harm is seen as falling upon the humans who miss out on achieving something in the workplace. But on that picture, we run into a sort of non-identity problem—for as soon as we identify the subjects of this kind of harm, we thereby affirm that it is not fitting to praise them for the workplace achievement, and so they cannot really be harmed in this way.

Sunday, July 4, 2021

Understanding Side-Effect Intentionality Asymmetries: Meaning, Morality, or Attitudes and Defaults?

Laurent SM, Reich BJ, Skorinko JLM. 
Personality and Social Psychology Bulletin. 
2021;47(3):410-425. 
doi:10.1177/0146167220928237

Abstract

People frequently label harmful (but not helpful) side effects as intentional. One proposed explanation for this asymmetry is that moral considerations fundamentally affect how people think about and apply the concept of intentional action. We propose something else: People interpret the meaning of questions about intentionally harming versus helping in fundamentally different ways. Four experiments substantially support this hypothesis. When presented with helpful (but not harmful) side effects, people interpret questions concerning intentional helping as literally asking whether helping is the agents’ intentional action or believe questions are asking about why agents acted. Presented with harmful (but not helpful) side effects, people interpret the question as asking whether agents intentionally acted, knowing this would lead to harm. Differences in participants’ definitions consistently helped to explain intentionality responses. These findings cast doubt on whether side-effect intentionality asymmetries are informative regarding people’s core understanding and application of the concept of intentional action.

From the Discussion

Second, questions about intentionality of harm may focus people on two distinct elements presented in the vignette: the agent’s  intentional action  (e.g., starting a profit-increasing program) and the harmful secondary outcome he knows this goal-directed action will cause. Because the concept of intentionality is most frequently applied to actions rather than consequences of actions (Laurent, Clark, & Schweitzer, 2015), reframing the question as asking about an intentional action undertaken with foreknowledge of harm has advantages. It allows consideration of key elements from the story and is responsive to what people may feel is at the heart of the question: “Did the chairman act intentionally, knowing this would lead to harm?” Notably, responses to questions capturing this idea significantly mediated intentionality responses in each experiment presented here, whereas other variables tested failed to consistently do so. 

Wednesday, March 3, 2021

Evolutionary biology meets consciousness: essay review

Browning, H., Veit, W. 
Biol Philos 36, 5 (2021). 
https://doi.org/10.1007/s10539-021-09781-7

Abstract

In this essay, we discuss Simona Ginsburg and Eva Jablonka’s The Evolution of the Sensitive Soul from an interdisciplinary perspective. Constituting perhaps the longest treatise on the evolution of consciousness, Ginsburg and Jablonka unite their expertise in neuroscience and biology to develop a beautifully Darwinian account of the dawning of subjective experience. Though it would be impossible to cover all its content in a short book review, here we provide a critical evaluation of their two key ideas—the role of Unlimited Associative Learning in the evolution of, and detection of, consciousness and a metaphysical claim about consciousness as a mode of being—in a manner that will hopefully overcome some of the initial resistance of potential readers to tackle a book of this length.

Here is one portion:

Modes of being

The second novel idea within their book is to conceive of consciousness as a new mode of being, rather than a mere trait. This part of their argument may appear unusual to many operating in the debate, not the least because this formulation—not unlike their choice to include Aristotle’s sensitive soul in the title—evokes a sense of outdated and strange metaphysics. We share some of this opposition to this vocabulary, but think it best conceived as a metaphor.

They begin their book by introducing the idea of teleological (goal-directed) systems and the three ‘modes of being’, taken from the works of Aristotle, each of which is considered to have a unique telos (goal). These are: life (survival/reproduction), sentience (value ascription to stimuli), and rationality (value ascription to concepts). The focus of this book is the second of these—the “sensitive soul”. Rather than a trait, such as vision, G&J see consciousness as a mode of being, in the same way as the emergence of life and rational thought also constitute new modes of being.

In several places throughout their book, G&J motivate their account through this analogy, i.e. by drawing a parallel from consciousness to life and/or rationality. Neither, they think, can be captured in a simple definition or trait, thus explaining the lack of progress on trying to come up with definitions for these phenomena. Compare their discussion of the distinction between life and non-life. Life, they argue, is not a functional trait that organisms possess, but rather a new way of being that opens up new possibilities; so too with consciousness. It is a new form of biological organization at a level above the organism that gives rise to a “new type of goal-directed system”, one which faces a unique set of challenges and opportunities. They identify three such transitions—the transition from non-life to life (the “nutritive soul”), the transition from non-conscious to conscious (the “sensitive soul”) and the transition from non-rational to rational (the “rational soul”). All three transitions mark a change to a new form of being, one in which the types of goals change. But while this is certainly correct in the sense of constituting a radical transformation in the kinds of goal-directed systems there are, we have qualms with the idea that this formal equivalence or abstract similarity can be used to ground more concrete properties. Yet G&J use this analogy to motivate their UAL account in parallel to unlimited heredity as a transition marker of life.

Sunday, February 14, 2021

Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?

Frank, L., Nyholm, S. 
Artif Intell Law 25, 305–323 (2017).
https://doi.org/10.1007/s10506-017-9212-y

Abstract

The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally we canvass the most influential existing literature on the ethics of sex with robots.

Here is an excerpt:

Here, we want to ask a similar question regarding how and whether sex robots should be brought into the legal community. Our overarching question is: is it conceivable, possible, and desirable to create autonomous and smart sex robots that are able to give (or withhold) consent to sex with a human person? For each of these three sub-questions (whether it is conceivable, possible, and desirable to create sex robots that can consent) we consider both “no” and “yes” answers. We are here mainly interested in exploring these questions in general terms and motivating further discussion. However, in discussing each of these sub-questions we will argue that, prima facie, the “yes” answers appear more convincing than the “no” answers—at least if the sex robots are of a highly sophisticated sort.

The rest of our discussion divides into the following sections. We start by saying a little more about what we understand by a “sex robot”. We also say more about what consent is, and we review the small literature that is starting to emerge on our topic (Sect. 1). We then turn to the questions of whether it is conceivable, possible, and desirable to create sex robots capable of giving consent—and discuss “no” and “yes” answers to all of these questions. When we discuss the case for considering it desirable to require robotic consent to sex, we argue that there can be both non-instrumental and instrumental reasons in favor of such a requirement (Sects. 2–4). We conclude with a brief summary (Sect. 5).

Friday, December 18, 2020

Are Free Will Believers Nicer People? (Four Studies Suggest Not)

Crone, D. L., & Levy, N. L.
Social Psychological and 
Personality Science. 2019;10(5):612-619. 
doi:10.1177/1948550618780732

Abstract

Free will is widely considered a foundational component of Western moral and legal codes, and yet current conceptions of free will are widely thought to fit uncomfortably with much research in psychology and neuroscience. Recent research investigating the consequences of laypeople’s free will beliefs (FWBs) for everyday moral behavior suggests that stronger FWBs are associated with various desirable moral characteristics (e.g., greater helpfulness, less dishonesty). These findings have sparked concern regarding the potential for moral degeneration throughout society as science promotes a view of human behavior that is widely perceived to undermine the notion of free will. We report four studies (combined N = 921) originally concerned with possible mediators and/or moderators of the abovementioned associations. Unexpectedly, we found no association between FWBs and moral behavior. Our findings suggest that the FWB–moral behavior association (and accompanying concerns regarding decreases in FWBs causing moral degeneration) may be overstated.


Saturday, May 30, 2020

Self-Nudging and the Citizen Choice Architect

Samuli Reijula, Ralph Hertwig.
Behavioural Public Policy, 2020
DOI: 10.1017/bpp.2020.5

Abstract

This article argues that nudges can often be turned into self-nudges: empowering interventions that enable people to design and structure their own decision environments—that is, to act as citizen choice architects. Self-nudging applies insights from behavioral science in a way that is practicable and cost-effective but that sidesteps concerns about paternalism or manipulation. It has the potential to expand the scope of application of behavioral insights from the public to the personal sphere (e.g., homes, offices, families). It is a tool for reducing failures of self-control and enhancing personal autonomy; specifically, self-nudging can mean designing one’s proximate choice architecture to alleviate the effects of self-control problems, engaging in education to understand the nature and causes of self-control problems, and employing simple educational nudges to improve goal attainment in various domains. It can even mean self-paternalistic interventions such as winnowing down one’s choice set by, for instance, removing options.  Policy makers could promote self-nudging by sharing knowledge about nudges and how they work. The ultimate goal of the self-nudging approach is to enable citizen choice architects’ efficient self-governance, where reasonable, and the self-determined arbitration of conflicts between their mutually exclusive goals and preferences.

From the Conclusion:

Commercial choice architects have become proficient in hijacking people’s attention and desires (see, e.g., Nestle 2013; Nestle 2015; Cross and Proctor 2014; Wu 2017), making it difficult for consumers to exercise agency and freedom of choice. Even in the best of circumstances, the potential for public choice architects to nudge people toward better choices in their personal and proximate choice environments is limited. Against this background, we suggest that policy makers should consider the possibility of empowering individuals to make strategic changes in their proximate choice architecture. There is no reason why citizens should not be informed about nudges that can be turned into self-nudges and, more generally, about the design principles of choice environments (e.g., defaults, framing, cognitive accessibility). We suggest that self-nudging is an untapped resource that sidesteps various ethical and practical problems associated with nudging and can empower people to make better everyday choices. This does not mean that regulation or nudging should be replaced by self-nudging; indeed, self-nudging can benefit enormously from the ingenuity of the nudging approach and the evidence accumulating on it. But, as the adage goes, give someone a fish, and you feed them for a day. Teach someone to fish, and you feed them for a lifetime. We believe that sharing behavioral insights from psychology and behavioral economics will provide citizens with the means for taking back power, giving them more control over the design of their proximate choice environments – in other words, qualifying them as citizen choice architects.


Friday, April 17, 2020

Toward equipping Artificial Moral Agents with multiple ethical theories

George Rautenbach and C. Maria Keet
arXiv:2003.00935v1 [cs.CY] 2 Mar 2020

Abstract

Artificial Moral Agents (AMAs) is a field in computer science with the purpose of creating autonomous machines that can make moral decisions akin to how humans do. Researchers have proposed theoretical means of creating such machines, while philosophers have made arguments as to how these machines ought to behave, or whether they should even exist.

Of the currently theorised AMAs, all research and design has been done with either none or at most one specified normative ethical theory as a basis. This is problematic because it narrows down the AMA’s functional ability and versatility, which in turn causes moral outcomes that only a limited number of people agree with (thereby undermining an AMA’s ability to be moral in a human sense). As a solution, we design a three-layer model for general normative ethical theories that can be used to serialise the ethical views of people and businesses for an AMA to use during reasoning. Four specific ethical norms (Kantianism, divine command theory, utilitarianism, and egoism) were modelled and evaluated as proof of concept for normative modelling. Furthermore, all models were serialised to XML/XSD as proof of support for computerisation.
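
The paper's actual three-layer model and XSD schemas are not reproduced in this excerpt, but the general idea of serialising a normative theory so a machine can load it is easy to sketch. The Python below is a loose, hypothetical illustration; the class, element, and rule names are mine, not the authors':

```python
# Loose, hypothetical sketch of the general idea -- the paper's actual
# three-layer model and XSD schemas are not reproduced here, and every
# name below is invented for illustration.
from dataclasses import dataclass, field
import xml.etree.ElementTree as ET

@dataclass
class EthicalTheory:
    """A normative theory reduced to named, weighted rules."""
    name: str
    rules: list = field(default_factory=list)  # (description, weight) pairs

    def to_xml(self) -> ET.Element:
        root = ET.Element("theory", name=self.name)
        for description, weight in self.rules:
            rule = ET.SubElement(root, "rule", weight=str(weight))
            rule.text = description
        return root

util = EthicalTheory("utilitarianism",
                     [("maximize aggregate well-being", 1.0)])
kant = EthicalTheory("kantianism",
                     [("act only on universalizable maxims", 1.0),
                      ("never treat persons merely as means", 1.0)])

# Serialised views could then be loaded by an AMA at reasoning time.
for theory in (util, kant):
    print(ET.tostring(theory.to_xml(), encoding="unicode"))
```

On this kind of picture, an AMA could load several serialised theories and weigh their outputs against one another, which is roughly the versatility the authors argue current single-theory designs lack.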

From the Discussion:

A big philosophical grey area in AMAs is with regard to agency: that is, an entity’s ability to understand available actions and their moral values and to freely choose between them. Whether or not machines can truly understand their decisions, and whether they can be held accountable for them, is a matter of philosophical discourse. Whatever the answer may be, AMA agency poses a difficult question that must be addressed.

The question is as follows: should the machine act as an agent itself, or should it act as an informant for another agent? If an AMA reasons for another agent (e.g., a person) then reasoning will be done with that person as the actor and the one who holds responsibility. This has the disadvantage of putting that person’s interest before other morally considerable entities, especially with regards to ethical theories like egoism. Making the machine the moral agent has the advantage of objectivity where multiple people are concerned, but makes it harder to assign blame for its actions - a machine does not care for imprisonment or even disassembly. A Luddite would say it has no incentive to do good to humanity. Of course, a deterministic machine does not need incentive at all, since it will always behave according to the theory it is running. This lack of fear or “personal interest” can be good, because it ensures objective reasoning and fair consideration of affected parties.


Thursday, October 10, 2019

Our illusory sense of agency has a deeply important social purpose

Chris Frith
aeon.com
Originally published September 22, 2019

Here are two excerpts:

We humans like to think of ourselves as mindful creatures. We have a vivid awareness of our subjective experience and a sense that we can choose how to act – in other words, that our conscious states are what cause our behaviour. Afterwards, if we want to, we might explain what we’ve done and why. But the way we justify our actions is fundamentally different from deciding what to do in the first place.

Or is it? Most of the time our perception of conscious control is an illusion. Many neuroscientific and psychological studies confirm that the brain’s ‘automatic pilot’ is usually in the driving seat, with little or no need for ‘us’ to be aware of what’s going on. Strangely, though, in these situations we retain an intense feeling that we’re in control of what we’re doing, what can be called a sense of agency. So where does this feeling come from?

It certainly doesn’t come from having access to the brain processes that underlie our actions. After all, I have no insight into the electrochemical particulars of how my nerves are firing or how neurotransmitters are coursing through my brain and bloodstream. Instead, our experience of agency seems to come from inferences we make about the causes of our actions, based on crude sensory data. And, as with any kind of perception based on inference, our experience can be tricked.

(cut)

These observations point to a fundamental paradox about consciousness. We have the strong impression that we choose when we do and don’t act and, as a consequence, we hold people responsible for their actions. Yet many of the ways we encounter the world don’t require any real conscious processing, and our feeling of agency can be deeply misleading.

If our experience of action doesn’t really affect what we do in the moment, then what is it for? Why have it? Contrary to what many people believe, I think agency is only relevant to what happens after we act – when we try to justify and explain ourselves to each other.


Monday, September 2, 2019

The Robotic Disruption of Morality

John Danaher
Philosophical Disquisitions
Originally published August 2, 2019

Here is an excerpt:

2. The Robotic Disruption of Human Morality

From my perspective, the most interesting aspect of Tomasello’s theory is the importance he places on the second personal psychology (an idea he takes from the philosopher Stephen Darwall). In essence, what he is arguing is that all of human morality — particularly the institutional superstructure that reinforces it — is premised on how we understand those with whom we interact. It is because we see them as intentional agents, who experience and understand the world in much the same way as we do, that we start to sympathise with them and develop complex beliefs about what we owe each other. This, in turn, was made possible by the fact that humans rely so much on each other to get things done.

This raises the intriguing question: what happens if we no longer rely on each other to get things done? What if our primary collaborative and cooperative partners are machines and not our fellow human beings? Will this have some disruptive impact on our moral systems?

The answer to this depends on what these machines are or, more accurately, what we perceive them to be. Do we perceive them to be intentional agents just like other human beings or are they perceived as something else — something different from what we are used to? There are several possibilities worth considering. I like to think of these possibilities as being arranged along a spectrum that classifies robots/AIs according to how autonomous or tool-like they are perceived to be.

At one extreme end of the spectrum we have the perception of robots/AIs as tools, i.e. as essentially equivalent to hammers and wheelbarrows. If we perceive them to be tools, then the disruption to human morality is minimal, perhaps non-existent. After all, if they are tools then they are not really our collaborative partners; they are just things we use. Human actors remain in control and they are still our primary collaborative partners. We can sustain our second personal morality by focusing on the tool users and not the tools.


Saturday, May 11, 2019

Free Will, an Illusion? An Answer from a Pragmatic Sentimentalist Point of View

Maureen Sie
Appears in: Caruso, G. (ed.), June 2013, Exploring the Illusion of Free Will and Moral Responsibility, Rowman & Littlefield.

According to some people, diverse findings in the cognitive and neurosciences suggest that free will is an illusion: We experience ourselves as agents, but in fact our brains decide, initiate, and judge before ‘we’ do (Soon, Brass, Heinze and Haynes 2008; Libet and Gleason 1983). Others have replied that the distinction between ‘us’ and ‘our brains’ makes no sense (e.g., Dennett 2003)  or that scientists misperceive the conceptual relations that hold between free will and responsibility (Roskies 2006). Many others regard the neuro-scientific findings as irrelevant to their views on free will. They do not believe that determinist processes are incompatible with free will to begin with, hence, do not understand why deterministic processes in our brain would be (see Sie and Wouters 2008, 2010). That latter response should be understood against the background of the philosophical free will discussion. In philosophy, free will is traditionally approached as a metaphysical problem, one that needs to be dealt with in order to discuss the legitimacy of our practices of responsibility. The emergence of our moral practices is seen as a result of the assumption that we possess free will (or some capacity associated with it) and the main question discussed is whether that assumption is compatible with determinism.  In this chapter we want to steer clear from this 'metaphysical' discussion.

The question we are interested in in this chapter is whether the above-mentioned scientific findings are relevant to our use of the concept of free will when that concept is approached from a different angle. We call this different angle the 'pragmatic sentimentalist' approach to free will (hereafter the PS-approach). This approach can be traced back to Peter F. Strawson’s influential essay “Freedom and Resentment” (Strawson 1962). Contrary to the metaphysical approach, the PS-approach does not understand free will as a concept that somehow precedes our moral practices. Rather, it is assumed that everyday talk of free will naturally arises in a practice that is characterized by certain reactive attitudes that we take towards one another. This is why it is called 'sentimentalist.' In this approach, the practical purposes of the concept of free will take center stage. This is why it is called 'pragmatist.'


Saturday, April 27, 2019

When Would a Robot Have Free Will?

Eddy Nahmias
The NeuroEthics Blog
Originally posted April 1, 2019

Here are two excerpts:

Joshua Shepherd (2015) had found evidence that people judge humanoid robots that behave like humans and are described as conscious to be free and responsible more than robots that carry out these behaviors without consciousness. We wanted to explore what sorts of consciousness influence attributions of free will and moral responsibility—i.e., deserving praise and blame for one’s actions. We developed several scenarios describing futuristic humanoid robots or aliens, in which they were described as either having or as lacking: conscious sensations, conscious emotions, and language and intelligence. We found that people’s attributions of free will generally track their attributions of conscious emotions more than attributions of conscious sensory experiences or intelligence and language. Consistent with this, we also found that people are more willing to attribute free will to aliens than robots, and in more recent studies, we see that people also attribute free will to many animals, with dolphins and dogs near the levels attributed to human adults.

These results suggest two interesting implications. First, when philosophers analyze free will in terms of the control required to be morally responsible—e.g., being ‘reasons-responsive’—they may be creating a term of art (perhaps a useful one). Laypersons seem to distinguish the capacity to have free will from the capacities required to be responsible. Our studies suggest that people may be willing to hold intelligent but non-conscious robots or aliens responsible even when they are less willing to attribute to them free will.

(cut)

A second interesting implication of our results is that many people seem to think that having a biological body and conscious feelings and emotions are important for having free will. The question is: why? Philosophers and scientists have often asserted that consciousness is required for free will, but most have been vague about what the relationship is. One plausible possibility we are exploring is that people think that what matters for an agent to have free will is that things can really matter to the agent. And for anything to matter to an agent, she has to be able to care—that is, she has to have foundational, intrinsic motivations that ground and guide her other motivations and decisions.


Friday, January 25, 2019

Decision-Making and Self-Governing Systems

Adina L. Roskies
Neuroethics
October 2018, Volume 11, Issue 3, pp 245–257

Abstract

Neuroscience has illuminated the neural basis of decision-making, providing evidence that supports specific models of decision-processes. These models typically are quite mechanical, the realization of abstract mathematical “diffusion to bound” models. While effective decision-making seems to be essential for sophisticated behavior, central to an account of freedom, and a necessary characteristic of self-governing systems, it is not clear how the simple models neuroscience inspires can underlie the notion of self-governance. Drawing from both philosophy and neuroscience I explore ways in which the proposed decision-making architectures can play a role in systems that can reasonably be thought of as “self-governing”.

Here is an excerpt:

The importance of prospection for self-governance cannot be overstated. One example in which it promises to play an important role is in the exercise of and failures of self-control. Philosophers have long been puzzled by the apparent possibility of akrasia or weakness of will: choosing to act in ways that one judges not to be in one’s best interest. Weakness of will is thought to be an example of irrational choice. If one’s theory of choice is that one always decides to pursue the option that has the highest value, and that it is rational to choose what one most values, it is hard to explain irrational choices. Apparent cases of weakness of will would really be cases of mistaken valuation: overvaluing an option that is in fact not the most valuable option. And indeed, if one cannot rationally criticize the strength of desires (see Hume’s famous observation that “it is not against reason that I should prefer the destruction of half the world to the pricking of my little finger”), we cannot explain irrational choice.
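
For readers unfamiliar with the "diffusion to bound" models mentioned in the abstract, the core mechanism fits in a few lines of code: noisy evidence accumulates over time until it crosses an upper or lower bound, which jointly determines the choice and the decision time. The sketch below is a generic illustration with arbitrary parameters, not a model from the article:

```python
# Generic "diffusion to bound" sketch with arbitrary parameters --
# an illustration of the model class, not a model from the article.
import numpy as np

def diffusion_to_bound(drift=0.3, noise=1.0, bound=1.0, dt=0.01, rng=None):
    """Accumulate noisy evidence until it crosses +bound or -bound.
    Returns (choice, decision_time): the bound hit and the time taken."""
    rng = rng if rng is not None else np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < bound:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if evidence > 0 else -1), t

rng = np.random.default_rng(42)
choices, times = zip(*(diffusion_to_bound(rng=rng) for _ in range(1000)))
print(f"P(upper bound) = {np.mean(np.array(choices) == 1):.2f}")
print(f"mean decision time = {np.mean(times):.2f} s")
```

Stronger drift (clearer evidence) raises accuracy and shortens decision times, while a higher bound trades speed for accuracy; part of Roskies' question is how mechanisms this simple could underwrite anything worth calling self-governance.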
