Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Virtue Ethics. Show all posts

Tuesday, May 21, 2024

Technology and the Situationist Challenge to Virtue Ethics

Tollon, F.
Sci Eng Ethics 30, 10 (2024).


In this paper, I introduce a “promises and perils” framework for understanding the “soft” impacts of emerging technology, and argue for a eudaimonic conception of well-being. This eudaimonic conception of well-being, however, presupposes that we have something like stable character traits. I therefore defend this view from the “situationist challenge” and show that instead of viewing this challenge as a threat to well-being, we can incorporate it into how we think about living well with technology. Human beings are susceptible to situational influences and are often unaware of the ways that their social and technological environments influence not only their ability to do well, but even their ability to know whether they are doing well. Any theory that attempts to describe what it means for us to be doing well, then, needs to take these contextual features into account and bake them into a theory of human flourishing. By paying careful attention to these contextual factors, we can design systems that promote human flourishing.

Here is my summary:

The paper examines how technological environments can undermine the development of virtuous character traits by shaping situational factors that influence moral behavior, posing a challenge to virtue ethics.

The Situationist critique argues that character traits are less stable and predictive of behavior than virtue ethics assumes. Instead, situational factors like social pressure and environmental cues often have a stronger influence on moral actions.

The author argues that many modern technologies, from social media to surveillance systems, create situational contexts that can override or undermine the development of virtuous character. For example, technologies that increase social monitoring and evaluation may inhibit moral courage.

The author suggests that virtues like honesty, compassion, and integrity may be more difficult to cultivate in technological environments that emphasize efficiency, productivity, and conformity over moral development.

The paper calls for virtue ethicists to grapple with how emerging technologies shape moral behavior, and to develop new approaches that account for the powerful situational influences created by technological systems.

In summary, this research highlights how the Situationist critique poses a significant challenge to traditional virtue ethics: technological environments can undermine the development of stable moral character, and new ethical frameworks are needed to address the situational factors shaping human behavior.

Wednesday, June 14, 2023

Can Robotic AI Systems Be Virtuous and Why Does This Matter?

Constantinescu, M., Crisp, R. 
Int J of Soc Robotics 14, 
1547–1557 (2022).


The growing use of social robots in times of isolation refocuses ethical concerns for Human–Robot Interaction and its implications for social, emotional, and moral life. In this article we raise a virtue-ethics-based concern regarding deployment of social robots relying on deep learning AI and ask whether they may be endowed with ethical virtue, enabling us to speak of “virtuous robotic AI systems”. In answering this question, we argue that AI systems cannot genuinely be virtuous but can only behave in a virtuous way. To that end, we start from the philosophical understanding of the nature of virtue in the Aristotelian virtue ethics tradition, which we take to imply the ability to perform (1) the right actions (2) with the right feelings and (3) in the right way. We discuss each of the three requirements and conclude that AI is unable to satisfy any of them. Furthermore, we relate our claims to current research in machine ethics, technology ethics, and Human–Robot Interaction, discussing various implications, such as the possibility to develop Autonomous Artificial Moral Agents in a virtue ethics framework.


AI systems are neither moody nor dissatisfied, and they do not want revenge, which seems to be an important advantage over humans when it comes to making various decisions, including ethical ones. From a virtue ethics point of view, however, this advantage becomes a major drawback, for it also means that they cannot act out of a virtuous character. Despite their ability to mimic human virtuous actions and even to function behaviourally in ways equivalent to human beings, robotic AI systems cannot perform virtuous actions in accordance with virtues, that is, rightly or virtuously; nor for the right reasons and motivations; nor, through phronesis, take into account the right circumstances. The consequence is that AI cannot genuinely be virtuous, at least not with the current technological advances supporting its functional development. Nonetheless, it might well be that the more we come to know about AI, the less we know about its future. We therefore leave open the possibility of AI systems being virtuous in some distant future. This might, however, require some disruptive, non-linear evolution that includes, for instance, the possibility that robotic AI systems fully deliberate over their own versus others' goals and happiness and make their own choices and priorities accordingly. Indeed, to be a virtuous agent one needs the possibility to make mistakes, to reason over virtuous and vicious lines of action. But this raises a different question: are we prepared to experience interaction with vicious robotic AI systems?

Saturday, March 19, 2022

The Content of Our Character

Brown, Teneille R.
Available at SSRN: https://ssrn.com/abstract=3665288


The rules of evidence assume that jurors can ignore most character evidence, but the data are clear. Jurors simply cannot *not* make character inferences. We are so driven to use character to assess blame, that we will spontaneously infer traits based on whatever limited information is available. In fact, within just 0.1 seconds of meeting someone, we have already decided if we think they are intelligent, trustworthy, likable, or kind--based just on the person’s face. This is a completely unregulated source of evidence, and yet it predicts teaching evaluations, electoral success, and even sentencing decisions. Given the pervasive and unintentional nature of “spontaneous trait inferences” (STIs), they are not susceptible to mitigation through jury instructions. However, recognizing that witnesses will be viewed as more or less trustworthy based just on their face, the rules of evidence must permit more character evidence, rather than less. This article harnesses undisputed findings from social psychology to propose a reversal of the ban on character evidence, in favor of a strong presumption against admissibility for immoral traits only. This removes a great deal from the rule’s crosshairs and re-tethers it to its normative roots. My proposal does not rely on the gossamer thin distinction between propensity and non-propensity uses, because once jurors hear about past act evidence, they will subconsciously draw an impermissible character inference. However, in some cases this might not be unfairly prejudicial, and may even be necessary for justice. The critical contribution of this article is that while shielding jurors from character evidence has noble origins, it also has unintended, negative consequences. When jurors cannot hear about how someone acted in the past, they will instead rely on immutable facial features—connected to racist, sexist and classist stereotypes—to draw character inferences that are even more inaccurate and unfair.

Here is an excerpt:

Moral Character Impacts Ratings of Intent

Previous models of intentionality held that for an act to be considered intentional, three things had to be present. The actor must have believed that an action would result in a particular outcome, desired this outcome, and had full awareness of his behavior. Research now challenges this account, “showing that individuals attribute intentions to others even (and largely) in the absence of these components.” Even where an actor could not have acted otherwise, and thus was coerced to kill, study participants found the actor to be more morally responsible for an act if he “identified” with it, meaning that he desired the compelled outcome. These findings do not fit with our typical model of blame, which requires freedom to act in order to assign responsibility. However, they make sense if we adopt a character-based approach to blame. We are quick to infer a bad character and intent when there is very little evidence of it.

An example of this is the hindsight bias called the “praise-blame asymmetry,” where people blame actors for accidental bad outcomes that they caused but did not intend, but do not praise people for accidental good outcomes that they likewise caused but did not intend. The classic example is the CEO who considers a development project that will increase profits. The CEO is agnostic to the project’s environmental effects and gives it the go-ahead. If the project’s outcome turns out to harm the environment, people say the CEO intended the bad outcome and they blame him for it. However, if instead the project turns out to benefit the environment, the CEO receives no praise. Our folk conception of intentionality is tied to morality and aversion to negative outcomes. If a foreseen outcome is negative, people will attribute intentionality to the decision-maker, but not if the foreseen outcome is positive; the overattribution of intent only seems to cut one way. Mens rea ascriptions are “sensitive to moral valence . . . . If the outcome is negative, foreknowledge standardly suffices for people to ascribe intentionality.” This effect has been found not just in laypeople, but also in French judges. If an action is considered immoral, then our emotional reaction to it can bias mental state ascriptions.

Friday, October 15, 2021

The Ethics of Sex Robots

Sterri, A. B., & Earp, B. D. (in press).
In C. Véliz (ed.), The Oxford Handbook of 
Digital Ethics. Oxford:  Oxford University Press.


What, if anything, is wrong with having sex with a robot? For the sake of this chapter, the authors will assume that sexbots are ‘mere’ machines that are reliably identifiable as such, despite their human-like appearance and behaviour. Under these stipulations, sexbots themselves can no more be harmed, morally speaking, than your dishwasher. However, there may still be something wrong about the production, distribution, and use of such sexbots. In this chapter, the authors examine whether sex with robots is intrinsically or instrumentally wrong and critically assess different regulatory responses. They defend a harm reduction approach to sexbot regulation, analogous to the approach that has been considered in other areas, concerning, for example, drugs and sex work.


Even if sexbots never become sentient, we have good reasons to be concerned with their production, distribution, and use. Our seemingly private activities have social meanings that we do not necessarily intend, but which can be harmful to others. Sex can both be beautiful and valuable—and ugly or profoundly harmful. We therefore need strong ethical norms to guide human sexual behaviour, regardless of the existence of sexbots. Interaction with new technologies could plausibly improve our sexual relationships, or make things worse (see Nyholm et al. forthcoming, for a theoretical overview). In this chapter, we have explored some ways in which a harm reduction framework may have the potential to bring about the alleged benefits of sexbots with a minimum of associated harms. But whatever approach is taken, the goal should be to ensure that our relationships with robots conduce to, rather than detract from, the equitable flourishing of our fellow human beings.

Saturday, September 11, 2021

Virtues for Real-World Utilitarians

Schubert, S., & Caviola, L. (2021, August 3)


Utilitarianism says that we should maximize aggregate well-being, impartially considered. But utilitarians that try to apply this principle will encounter many psychological obstacles, ranging from selfishness to moral biases to limits to epistemic and instrumental rationality. In this chapter, we argue that utilitarians should cultivate a number of virtues that allow them to overcome the most important of these obstacles. We select virtues based on two criteria. First, the virtues should be impactful: they should greatly increase your impact (according to utilitarian standards), if you acquire them. Second, the virtues should be acquirable: they should be psychologically realistic to acquire. Using these criteria, we argue that utilitarians should prioritize six virtues: moderate altruism, moral expansiveness, effectiveness-focus, truth-seeking, collaborativeness, and determination. Finally, we discuss how our suggested list of virtues compares with standard conceptions of utilitarianism, as well as with common sense morality.


We have suggested six virtues that utilitarians should cultivate to overcome psychological obstacles to utilitarianism and maximize their impact in the real world: moderate altruism, moral expansiveness, effectiveness-focus, truth-seeking, collaborativeness, and determination. To reiterate, this list is tentative, and should be seen more as a starting point for further research than as a well-consolidated set of findings. It is plausible that some of our suggested virtues should be refined, and that further virtues should be added to the list. We hope it will inspire a debate among philosophers and psychologists about which virtues utilitarians should prioritize most.

Tuesday, May 19, 2020

Uncovering the moral heuristics of altruism: A philosophical scale

Friedland J, Emich K, Cole BM (2020)
PLoS ONE 15(3): e0229124.


Extant research suggests that individuals employ traditional moral heuristics to support their observed altruistic behavior; yet findings have largely been limited to inductive extrapolation and rely on relatively few traditional frames in so doing, namely, deontology in organizational behavior and virtue theory in law and economics. Given that these and competing moral frames such as utilitarianism can manifest as identical behavior, we develop a moral framing instrument—the Philosophical Moral-Framing Measure (PMFM)—to expand and distinguish traditional frames associated and disassociated with observed altruistic behavior. The validation of our instrument based on 1015 subjects in 3 separate real stakes scenarios indicates that heuristic forms of deontology, virtue-theory, and utilitarianism are strongly related to such behavior, and that egoism is an inhibitor. It also suggests that deontic and virtue-theoretical frames may be commonly perceived as intertwined and opens the door for new research on self-abnegation, namely, a perceived moral obligation toward suffering and self-denial. These findings hold the potential to inform ongoing conversations regarding organizational citizenship and moral crowding out, namely, how financial incentives can undermine altruistic behavior.

The research is here.

Monday, August 19, 2019

The evolution of moral cognition

Leda Cosmides, Ricardo Guzmán, and John Tooby
The Routledge Handbook of Moral Epistemology - Chapter 9

1. Introduction

Moral concepts, judgments, sentiments, and emotions pervade human social life. We consider certain actions obligatory, permitted, or forbidden, recognize when someone is entitled to a resource, and evaluate character using morally tinged concepts such as cheater, free rider, cooperative, and trustworthy. Attitudes, actions, laws, and institutions can strike us as fair, unjust, praiseworthy, or punishable: moral judgments. Morally relevant sentiments color our experiences—empathy for another’s pain, sympathy for their loss, disgust at their transgressions—and our decisions are influenced by feelings of loyalty, altruism, warmth, and compassion. Full-blown moral emotions organize our reactions—anger toward displays of disrespect, guilt over harming those we care about, gratitude for those who sacrifice on our behalf, outrage at those who harm others with impunity. A newly reinvigorated field, moral psychology, is investigating the genesis and content of these concepts, judgments, sentiments, and emotions.

This handbook reflects the field’s intellectual diversity: Moral psychology has attracted psychologists (cognitive, social, developmental), philosophers, neuroscientists, evolutionary biologists,  primatologists, economists, sociologists, anthropologists, and political scientists.

The chapter can be found here.

Thursday, April 4, 2019

Confucian Ethics as Role-Based Ethics

A. T. Nuyen
International Philosophical Quarterly
Volume 47, Issue 3, September 2007, 315-328.


For many commentators, Confucian ethics is a kind of virtue ethics. However, there is enough textual evidence to suggest that it can be interpreted as an ethics based on rules, consequentialist as well as deontological. Against these views, I argue that Confucian ethics is based on the roles that make an agent the person he or she is. Further, I argue that in Confucianism the question of what it is that a person ought to do cannot be separated from the question of what it is to be a person, and that the latter is answered in terms of the roles that arise from the network of social relationships in which a person stands. This does not mean that Confucian ethics is unlike anything found in Western philosophy. Indeed, I show that many Western thinkers have advanced a view of ethics similar to the Confucian ethics as I interpret it.

The info is here.

Wednesday, October 24, 2018

Chinese Ethics

Wong, David
The Stanford Encyclopedia of Philosophy (Fall 2018 Edition)

The tradition of Chinese ethical thought is centrally concerned with questions about how one ought to live: what goes into a worthwhile life, how to weigh duties toward family versus duties toward strangers, whether human nature is predisposed to be morally good or bad, how one ought to relate to the non-human world, the extent to which one ought to become involved in reforming the larger social and political structures of one’s society, and how one ought to conduct oneself when in a position of influence or power. The personal, social, and political are often intertwined in Chinese approaches to the subject. Anyone who wants to draw from the range of important traditions of thought on this subject needs to look seriously at the Chinese tradition. The canonical texts of that tradition have been memorized by schoolchildren in Asian societies for hundreds of years, and at the same time have served as objects of sophisticated and rigorous analysis by scholars and theoreticians rooted in widely variant traditions and approaches. This article will introduce ethical issues raised by some of the most influential texts in Confucianism, Mohism, Daoism, Legalism, and Chinese Buddhism.

The info is here.

Tuesday, November 21, 2017

What The Good Place Can Teach You About Morality

Patrick Allan
Originally posted November 6, 2017

Here is an excerpt:

Doing “Good” Things Doesn’t Necessarily Make You a Good Person

In The Good Place, the version of the afterlife you get sent to is based on a complicated point system. Doing “good” deeds earns you a certain number of positive points, and doing “bad” things will subtract them. Your point total when you die is what decides where you’ll go. Seems fair, right?

Despite the fact that The Good Place makes life feel like a point-based video game, we quickly learn morality isn’t as black and white as positive points and negative points. At one point, Eleanor tries to rack up points by holding doors for people, an action worth 3 points a pop. To put that in perspective, her score is -4,008 and she needs to meet the average of 1,222,821. It would take her a long time to get there, but it’s one way to do it. At least, it would be if it worked. She quickly learns after a while that she didn’t earn any points because she’s not actually trying to be nice to people. Her only goal is to rack up points so she can stay in The Good Place, which is an inherently selfish reason. The situation brings up a valid question: are “good” things done for selfish reasons still “good” things?
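Just how long "a long time" is falls out of the figures quoted above. Here is a back-of-the-envelope calculation using only the show's numbers as given in the excerpt:

```python
# Rough check of the figures quoted above (the show's numbers, not an
# independent source): doors held at 3 points each, starting from a
# score of -4,008 and aiming for the stated average of 1,222,821.
start_score = -4_008
target_score = 1_222_821
points_per_door = 3

doors_needed = (target_score - start_score) / points_per_door
print(f"{doors_needed:,.0f} doors")  # 408,943 doors
```

Over 400,000 held doors: the show's arithmetic makes Eleanor's strategy hopeless even before it turns out her motives disqualify the points.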

I don’t want to spoil too much, but as the series goes on, we see this question asked time and time again with each of its characters. Chidi may have spent his life studying moral ethics, but does knowing everything about pursuing “good” mean you are? Tahani spent her entire life as a charitable philanthropist, but she did it all for the questionable pursuit of finally outshining her near-perfect sister. She did a lot of good, but is she “good?” It’s something to consider yourself as you go about your day. Try to do “good” things, but ask yourself every once in a while who those “good” things are really for.

The article is here.

Note: I really enjoy watching The Good Place.  Very clever. 

My spoiler: I think Michael is supposed to be in The Good Place too, not really the architect.

Friday, November 3, 2017

A fundamental problem with Moral Enhancement

Joao Fabiano
Practical Ethics
Originally posted October 13, 2017

Moral philosophers often prefer to conceive thought experiments, dilemmas and problem cases of single individuals who make one-shot decisions with well-defined short-term consequences. Morality is complex enough that such simplifications seem justifiable or even necessary for philosophical reflection. If, even when considering simplified toy scenarios, we are still far from consensus on which is the best moral theory or what makes actions right or wrong – or even on whether such questions should be the central problem of moral philosophy – then introducing group or long-term effects would make matters significantly worse. However, when it comes to actually changing human moral dispositions with the use of technology (i.e., moral enhancement), ignoring the essential fact that morality deals with group behaviour with long-ranging consequences can be extremely risky. Despite those risks, attempting to provide a full account of morality in order to conduct moral enhancement would be both simply impractical and arguably risky. We seem to be far away from such an account, yet there are pressing current moral failings, such as the inability to achieve proper large-scale cooperation, which make solving present global catastrophic risks, such as global warming or nuclear war, next to impossible. Sitting back and waiting for a complete theory of morality might be riskier than attempting to fix our moral failings using incomplete theories. We must, nevertheless, proceed with caution and an awareness of such incompleteness. Here I will present several severe risks from moral enhancement that arise from focusing on improving individual dispositions while ignoring emergent societal effects and point to tentative solutions to those risks. I deem those emergent risks fundamental problems both because they lie at the foundation of the theoretical framework guiding moral enhancement – moral philosophy – and because they seem, at the time, inescapable; my proposed solution will aim at increasing awareness of such problems instead of directly solving them.

The article is here.

Sunday, October 29, 2017

Courage and Compassion: Virtues in Caring for So-Called “Difficult” Patients

Michael Hawking, Farr A. Curlin, and John D. Yoon
AMA Journal of Ethics. April 2017, Volume 19, Number 4: 357-363.


What, if anything, can medical ethics offer to assist in the care of the “difficult” patient? We begin with a discussion of virtue theory and its application to medical ethics. We conceptualize the “difficult” patient as an example of a “moral stress test” that especially challenges the physician’s character, requiring the good physician to display the virtues of courage and compassion. We then consider two clinical vignettes to flesh out how these virtues might come into play in the care of “difficult” patients, and we conclude with a brief proposal for how medical educators might cultivate these essential character traits in physicians-in-training.

Here is an excerpt:

To give a concrete example of a virtue that will be familiar to anyone in medicine, consider the virtue of temperance. A temperate person exhibits appropriate self-control or restraint. Aristotle describes temperance as a mean between two extremes—in the case of eating, an extreme lack of temperance can lead to morbid obesity and its excess to anorexia. Intemperance is a hallmark of many of our patients, particularly among those with type 2 diabetes, alcoholism, or cigarette addiction. Clinicians know all too well the importance of temperance because they see the results for human beings who lack it—whether it be amputations and dialysis for the diabetic patient; cirrhosis, varices, and coagulopathy for the alcoholic patient; or chronic obstructive pulmonary disease and lung cancer for the lifelong smoker. In all of these cases, intemperance inhibits a person’s ability to flourish. These character traits do, of course, interact with social, cultural, and genetic factors in impacting an individual’s health, but a more thorough exploration of these factors is outside the scope of this paper.

The article is here.

Friday, October 20, 2017

A virtue ethics approach to moral dilemmas in medicine

P Gardiner
J Med Ethics. 2003 Oct; 29(5): 297–302.


Most moral dilemmas in medicine are analysed using the four principles with some consideration of consequentialism but these frameworks have limitations. It is not always clear how to judge which consequences are best. When principles conflict it is not always easy to decide which should dominate. They also do not take account of the importance of the emotional element of human experience. Virtue ethics is a framework that focuses on the character of the moral agent rather than the rightness of an action. In considering the relationships, emotional sensitivities, and motivations that are unique to human society it provides a fuller ethical analysis and encourages more flexible and creative solutions than principlism or consequentialism alone. Two different moral dilemmas are analysed using virtue ethics in order to illustrate how it can enhance our approach to ethics in medicine.

A pdf download of the article can be found here.

Note from John: This article is interesting for a myriad of reasons. For me, it shows that we ethics educators have come a long way in 14 years.

Wednesday, June 14, 2017

Should We Outsource Our Moral Beliefs to Others?

Grace Boey
3 Quarks Daily
Originally posted May 29, 2017

Here is an excerpt:

Setting aside the worries above, there is one last matter that many philosophers take to be the most compelling candidate for the oddity of outsourcing our moral beliefs to others. As moral agents, we’re interested in more than just accumulating as many true moral beliefs as possible, such as ‘abortion is permissible’, or ‘killing animals for sport is wrong’. We also value things such as developing moral understanding, cultivating virtuous characters, having appropriate emotional reactions, and the like. Although moral deference might allow us to acquire bare moral knowledge from others, it doesn’t allow us to reflect or cultivate these other moral goods which are central to our moral identity.

Consider the value we place on understanding why we think our moral beliefs are true. Alison Hills notes that pure moral deference can’t get us to such moral understanding. When Bob defers unquestioningly to Sally’s judgment that abortion is morally permissible, he lacks an understanding of why this might be true. Amongst other things, this prevents Bob from being able to articulate, in his own words, the reasons behind this claim. This seems strange enough in itself, and Hills argues for at least two reasons why Bob’s situation is a bad one. For one, Bob’s lack of moral understanding prevents him from acting in a morally worthy way. Bob wouldn’t deserve any moral praise for, say, shutting down someone who harasses women who undergo the procedure.

Moreover, Bob’s lack of moral understanding seems to reflect a lack of good moral character, or virtue. Bob’s belief that ‘late-term abortion is permissible’ isn’t integrated with the rest of his thoughts, motivations, emotions, and decisions. Moral understanding, of course, isn’t all that matters for virtue and character. But philosophers who disagree with Hills on this point, like Robert Howell and Errol Lord, also note that moral deference reflects a lack of virtue and character in other ways, and can prevent the cultivation of these traits.

The article is here.

Tuesday, February 7, 2017

Business Leaders Get an ‘F’ in Ethics, Yet Again

Bruce Weinstein
Updated: Jan 09, 2016 

Here is an excerpt:

Business ethics can be improved

Public perception is malleable, so there is no reason why business executives have to remain stuck at the bottom of the Gallup poll. I propose the following four strategies for businesses that want to be regarded as honest and trustworthy:

Publicize your values. It never ceases to amaze me how few businesses list their company’s values and ethical commitments on their websites. This is the first Call to Action that I give businesses that hire me as a consultant: put your organization’s mission statement, code of ethics, and core values on the home page where they can be readily accessed.

Hire for character. The values and ethical standards you post on your website don’t mean anything if they’re not embodied by your employees. You understandably devote a lot of energy, time, and resources to hiring people who are knowledgeable and skilled. Isn’t it at least as important to hire people who are consistently honest, accountable, loyal, and fair—that is, men and women of high character?

Fire for character. Just as it’s crucial to bring high-character people into your organization, so too is it to get rid of those who don’t share your organization’s values. No matter how much the senior vice president of marketing knows about his or her field, if he or she has played fast and loose with the truth or hasn’t honored commitments to clients, why keep him or her on the payroll?

Reward excellence. I recently spoke at a Fortune 100 company on the day when five employees who embodied the company’s values were flown in to receive a prestigious award and a handsome bonus. One young man had found a $15,000 diamond ring in his store’s parking lot and had gone to considerable lengths to track down the owner. Imagine how the customer felt when her ring was returned. And imagine the positive word-of-mouth she gave the company.

Tuesday, December 27, 2016

Artificial moral agents: creative, autonomous and social. An approach based on evolutionary computation

Ioan Muntean and Don Howard
Frontiers in Artificial Intelligence and Applications
Volume 273: Sociable Robots and the Future of Social Relations


In this paper we propose a model of artificial normative agency that accommodates some social competencies that we expect from artificial moral agents. The artificial moral agent (AMA) discussed here is based on two components: (i) a version of virtue ethics of human agents (VE) adapted to artificial agents, called here “virtual virtue ethics” (VVE); and (ii) an implementation based on evolutionary computation (EC), more concretely genetic algorithms. The reasons to choose VVE and EC are related to two elements that are, we argue, central to any approach to artificial morality: autonomy and creativity. The greater the autonomy an artificial agent has, the more it needs moral standards. In the virtue ethics, each agent builds her own character in time; creativity comes in degrees as the individual becomes morally competent. The model of an autonomous and creative AMA thus implemented is called GAMA= Genetic(-inspired) Autonomous Moral Agent. First, unlike the majority of other implementations of machine ethics, our model is more agent-centered, than action-centered; it emphasizes the developmental and behavioral aspects of the ethical agent. Second, in our model, the AMA does not make decisions exclusively and directly by following rules or by calculating the best outcome of an action. The model incorporates rules as initial data (as the initial population of the genetic algorithms) or as correction factors, but not as the main structure of the algorithm. Third, our computational model is less conventional, or at least it does not fall within the Turing tradition in computation. Genetic algorithms are excellent searching tools that avoid local minima and generate solutions based on previous results. In the GAMA model, only prospective at this stage, the VVE approach to ethics is better implemented by EC. Finally, the GAMA agents can display sociability through competition among the best moral actions and the desire to win the competition. 
Both VVE and EC are better suited to a “social approach” to AMAs than the standard approaches, making GAMA a more promising “moral and social artificial agent.”
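The abstract describes rules entering a genetic algorithm as the initial population, with selection, crossover, and mutation doing the developmental work. Since the GAMA model is only prospective, the authors give no implementation; the following is a minimal, purely illustrative sketch of that kind of genetic algorithm in Python. The "disposition" encoding, the fitness target, and all parameter values are assumptions invented for illustration, not part of the paper.

```python
import random

# Toy genetic algorithm: evolve "action dispositions" encoded as bit strings.
# The fitness function is a stand-in moral score; everything below is an
# illustrative assumption, not the authors' GAMA implementation.

TARGET = [1, 1, 0, 1, 0, 1, 1, 0]  # hypothetical "virtuous" disposition profile

def fitness(genome):
    # Count how many dispositions match the target profile.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(generations=100, pop_size=30):
    random.seed(0)
    # Rules enter as initial data: here the population is seeded randomly,
    # but rule-derived genomes could be used instead, as the abstract suggests.
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        parents = pop[: pop_size // 2]  # selection: keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the fittest parents survive each generation unchanged, the best score never decreases, which mirrors the abstract's point that the algorithm "generates solutions based on previous results" rather than recomputing from rules each time.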

The article is here.

Thursday, September 22, 2016

Does Situationism Threaten Free Will and Moral Responsibility?

Michael McKenna and Brandon Warmke
Journal of Moral Psychology


The situationist movement in social psychology has caused a considerable stir in philosophy over the last fifteen years. Much of this was prompted by the work of the philosophers Gilbert Harman (1999) and John Doris (2002). Both contended that familiar philosophical assumptions about the role of character in the explanation of human action were not supported by the situationists' experimental results. Most of the ensuing philosophical controversy has focused upon issues related to moral psychology and ethical theory, especially virtue ethics. More recently, the influence of situationism has also given rise to further questions regarding free will and moral responsibility (e.g., Brink 2013; Ciurria 2013; Doris 2002; Mele and Shepherd 2013; Miller 2016; Nelkin 2005; Talbert 2009; and Vargas 2013b). In this paper, we focus just upon these latter issues. Moreover, we focus primarily on reasons-responsive theories. There is cause for concern that a range of situationist findings are in tension with the sort of reasons-responsiveness putatively required for free will and moral responsibility. Here, we develop and defend a response to the alleged situationist threat to free will and moral responsibility that we call pessimistic realism. We conclude on an optimistic note, however, exploring the possibility of strengthening our agency in the face of situational influences.

The article is here.

Saturday, August 27, 2016

Empirical Approaches to Moral Character

Miller, Christian B.
The Stanford Encyclopedia of Philosophy (Fall 2016 Edition),
Edward N. Zalta (ed.), forthcoming

The turn of the century saw a significant increase in the amount of attention being paid by philosophers to empirical issues about moral character. Dating back at least to Plato and Aristotle in the West, and Confucius in the East, philosophers have traditionally drawn on empirical data to some extent in their theorizing about character. One of the main differences in recent years has been the source of this empirical data, namely the work of social and personality psychologists on morally relevant thought and action.

This entry briefly examines four recent empirical approaches to moral character. It will draw on the psychology literature where appropriate, but the main focus will be on the significance of that work for philosophers interested in better understanding moral character. The four areas are situationism, the CAPS model, the Big Five model, and the VIA. The remainder of this entry devotes a section to each of them.

The entry is here.

Monday, February 22, 2016

Morality is a muscle. Get to the gym.

Pascal-Emmanuel Gobry
The Week
Originally published January 18, 2016

Here is an excerpt:

Take the furor over "trigger warnings" in college classes and textbooks. One side believes that in order to protect the sensitivities of some students, professors or writers should warn readers or students at the beginning of an article or course about controversial topics. Another side says that if someone can't handle rough material, then he can stop reading or step out of the room, and that trigger warnings are an unconscionable affront to freedom of thought. Interestingly, both schools clearly believe that there is one moral stance which takes the form of a rule that should be obeyed always and everywhere. Always and everywhere we should have trigger warnings to protect people's sensibilities, or always and everywhere we should not.

Both sides need a lecture in virtue ethics.

If I try to stretch my virtue of empathy, it doesn't seem at all absurd to me to imagine that, say, a young woman who has been raped might be made quite uncomfortable by a class discussion of rape in literature, and that this is something to which we should be sensitive. But the trigger warning people should perhaps think more about the moral imperative to develop the virtue of courage, including intellectual courage. Then it seems to me that if you just put aside grand moral questions about freedom of inquiry, simple basic human courtesy would mean a professor would try to take into account a trauma victim's sensibilities while teaching sensitive material, and students would understand that part of the goal of a college class is to challenge them. We don't need to debate universal moral values, we just need to be reminded to exercise virtue more.

The article is here.

Thursday, November 5, 2015

A Code of Ethics for Health Care Ethics Consultants

Anita J. Tarzian & Lucia D. Wocial
American Journal of Bioethics 15 (5):38-51 (2015)


For decades a debate has played out in the literature about who bioethicists are, what they do, whether they can be considered professionals qua bioethicists, and, if so, what professional responsibilities they are called to uphold. Health care ethics consultants are bioethicists who work in health care settings. They have been seeking guidance documents that speak to their special relationships/duties toward those they serve. By approving a Code of Ethics and Professional Responsibilities for Health Care Ethics Consultants, the American Society for Bioethics and Humanities (ASBH) has moved the professionalization debate forward in a significant way. This first code of ethics focuses on individuals who provide health care ethics consultation (HCEC) in clinical settings. The evolution of the code's development, implications for the field of HCEC and bioethics, and considerations for future directions are presented here.

The entire paper is here.