Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, July 12, 2021

Workplace automation without achievement gaps: a reply to Danaher and Nyholm

Tigard, D.W. 
AI Ethics (2021). 
https://doi.org/10.1007/s43681-021-00064-1

Abstract

In a recent article in this journal, John Danaher and Sven Nyholm raise well-founded concerns that the advances in AI-based automation will threaten the values of meaningful work. In particular, they present a strong case for thinking that automation will undermine our achievements, thereby rendering our work less meaningful. It is also claimed that the threat to achievements in the workplace will open up ‘achievement gaps’—the flipside of the ‘responsibility gaps’ now commonly discussed in technology ethics. This claim, however, is far less worrisome than the general concerns for widespread automation, namely because it rests on several conceptual ambiguities. With this paper, I argue that although the threat to achievements in the workplace is problematic and calls for policy responses of the sort Danaher and Nyholm outline, when framed in terms of responsibility, there are no ‘achievement gaps’.

From the Conclusion

In closing, it is worth stopping to ask: Who exactly is the primary subject of “harm” (broadly speaking) in the supposed gap scenarios? Typically, in cases of responsibility gaps, the harm is seen as falling upon the person inclined to respond (usually with blame) and finding no one to respond to. This is often because they seek apologies or some sort of remuneration, and as we can imagine, it sets back their interests when such demands remain unfulfilled. But what about cases of achievement gaps? If we want to draw truly close analogies between the two scenarios, we would consider the subject of harm to be the person inclined to respond with praise and finding no one to praise. And perhaps there is some degree of disappointment here, but it hardly seems to be a worrisome kind of experience for that person. With this in mind, we might say there is yet another mismatch between responsibility gaps and achievement gaps. Nevertheless, on the account of Danaher and Nyholm, the harm is seen as falling upon the humans who miss out on achieving something in the workplace. But on that picture, we run into a sort of non-identity problem—for as soon as we identify the subjects of this kind of harm, we thereby affirm that it is not fitting to praise them for the workplace achievement, and so they cannot really be harmed in this way.

Sunday, July 11, 2021

It just feels right: an account of expert intuition

Fridland, E., & Stichter, M. 
Synthese (2020). 
https://doi.org/10.1007/s11229-020-02796-9

Abstract

One of the hallmarks of virtue is reliably acting well. Such reliable success presupposes that an agent (1) is able to recognize the morally salient features of a situation, and the appropriate response to those features and (2) is motivated to act on this knowledge without internal conflict. Furthermore, it is often claimed that the virtuous person can do this (3) in a spontaneous or intuitive manner. While these claims represent an ideal of what it is to have a virtue, it is less clear how to make good on them. That is, how is it actually possible to spontaneously and reliably act well? In this paper, we will lay out a framework for understanding how it is that one could reliably act well in an intuitive manner. We will do this by developing the concept of an action schema, which draws on the philosophical and psychological literature on skill acquisition and self-regulation. In short, we will give an account of how self-regulation, grounded in skillful structures, can allow for the accurate intuitions and flexible expertise required for virtue. While our primary goal in this paper is to provide a positive theory of how virtuous intuitions might be accounted for, we also take ourselves to be raising the bar for what counts as an explanation of reliable and intuitive action in general.

Conclusion

By thinking of skill and expertise as sophisticated forms of self-regulation, we are able to get a handle on intuition generally, and on the ways in which reliably accurate intuition may develop in virtue specifically. This gives us a way of explaining both the accuracy and immediacy of the virtuous person’s perception and intuitive responsiveness to a situation, and it also gives us further reason to prefer a virtue-as-skill account of virtue. Moreover, such an approach gives us the resources to explain, with some rigor and precision, the ways in which expert intuition can be accounted for by appeal to action schemas. Lastly, our approach provides reason to think that expert intuition in the realm of virtue can indeed develop over time and with practice in a way that is flexible, controlled, and intelligent. It lends credence to the view that virtue is learned and that we can act reliably and well by grounding our actions in expert intuition.

Saturday, July 10, 2021

Is Burnout Depression by Another Name?

Bianchi R, Verkuilen J, Schonfeld IS, et al. 
Clinical Psychological Science. March 2021. 
doi:10.1177/2167702620979597

Abstract

There is no consensus on whether burnout constitutes a depressive condition or an original entity requiring specific medical and legal recognition. In this study, we examined burnout–depression overlap using 14 samples of individuals from various countries and occupational domains (N = 12,417). Meta-analytically pooled disattenuated correlations indicated (a) that exhaustion—burnout’s core—is more closely associated with depressive symptoms than with the other putative dimensions of burnout (detachment and efficacy) and (b) that the exhaustion–depression association is problematically strong from a discriminant validity standpoint (r = .80). The overlap of burnout’s core dimension with depression was further illuminated in 14 exploratory structural equation modeling bifactor analyses. Given their consistency across countries, languages, occupations, measures, and methods, our results offer a solid base of evidence in support of the view that burnout problematically overlaps with depression. We conclude by outlining avenues of research that depart from the use of the burnout construct.
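The "disattenuated" correlations pooled in the abstract are observed correlations corrected for measurement unreliability in each scale (Spearman's classic correction for attenuation). As a rough sketch of how that correction works, with made-up numbers rather than values from the study:

```python
from math import sqrt

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation: estimate the correlation
    between two constructs after removing measurement error, given the
    observed correlation and each scale's reliability coefficient."""
    return r_observed / sqrt(rel_x * rel_y)

# Hypothetical illustration: an observed exhaustion-depression correlation
# of .64, with reliability of .80 for each scale, disattenuates to .80.
print(round(disattenuate(0.64, 0.80, 0.80), 2))
```

Because the correction divides by the reliabilities, a modest observed correlation between imperfectly measured scales can imply a near-complete overlap between the underlying constructs, which is the discriminant-validity worry the authors raise.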

--------

In essence, the core feature of burnout, exhaustion, overlaps substantially with depression. However, burnout is typically not as debilitating as depression.

Friday, July 9, 2021

Why It’s Time To Modernize Your Ethics Hotline

Claire Schmidt
Forbes.com
Originally posted 18 Jun 21

Traditional whistleblower hotlines are going to be a thing of the past.

They certainly served a purpose and pioneered a way for employees to report wrongdoing at their companies confidentially. But the reasons are stacking up as to why they no longer serve companies and employees in 2021. And if companies continue to use them, they need to realize that issues or concerns may go unreported because employees don’t want to use that channel to report.

After all, the function of a whistleblower hotline is to encourage employees to report any wrongdoing they see in the workplace through a confidential channel, which means that the channels for reporting should get an upgrade.

But there are deeper reasons why issues remain unreported — and it goes beyond just offering a hotline to use. Today, companies need to give their employees better ways to report wrongdoing, as well as tell them the value of why they should do so. Otherwise, companies won’t hear about the full extent of wrongdoing happening in the workplace, whatever channel they provide.

The Evolution Of Workplace Reporting Channels

Whistleblower or ethics hotlines were initially that: a phone number — because that was the technology at the time — that employees could anonymously call to report wrongdoings at a company. The Sarbanes-Oxley Act of 2002 mandated that companies set up a method for “the confidential, anonymous submission by employees of the issuer of concerns regarding questionable accounting or auditing matters.”

Thursday, July 8, 2021

Free Will and Neuroscience: Decision Times and the Point of No Return

Alfred Mele
In Free Will, Causality, & Neuroscience
Chapter 4

Here are some excerpts:

Decisions to do things, as I conceive of them, are momentary actions of forming an intention to do them. For example, to decide to flex my right wrist now is to perform a (nonovert) action of forming an intention to flex it now (Mele 2003, ch. 9). I believe that Libet understands decisions in the same way. Some of our decisions and intentions are for the nonimmediate future and others are not. I have an intention today to fly to Brussels three days from now, and I have an intention now to click my “save” button now. The former intention is aimed at action three days in the future. The latter intention is about what to do now. I call intentions of these kinds, respectively, distal and proximal intentions (Mele 1992, pp. 143–44, 158, 2009, p. 10), and I make the same distinction in the sphere of decisions to act. Libet studies proximal intentions (or decisions or urges) in particular.

(cut)

Especially in the case of the study now under discussion, readers unfamiliar with Libet-style experiments may benefit from a short description of my own experience as a participant in such an experiment (see Mele 2009, pp. 34–36). I had just three things to do: watch a Libet clock with a view to keeping track of when I first became aware of something like a proximal urge, decision, or intention to flex; flex whenever I felt like it (many times over the course of the experiment); and report, after each flex, where I believed the hand was on the clock at the moment of first awareness. (I reported this belief by moving a cursor to a point on the clock. The clock was very fast; it made a complete revolution in about 2.5 seconds.) Because I did not experience any proximal urges, decisions, or intentions to flex, I hit on the strategy of saying “now!” silently to myself just before beginning to flex. This is the mental event that I tried to keep track of with the assistance of the clock. I thought of the “now!” as shorthand for the imperative “flex now!” – something that may be understood as an expression of a proximal decision to flex.

Why did I say “now!” exactly when I did? On any given trial, I had before me a string of equally good moments for a “now!”-saying, and I arbitrarily picked one of the moments. But what led me to pick the moment I picked? The answer offered by Schurger et al. is that random noise crossed a decision threshold then. And they locate the time of the crossing very close to the onset of muscle activity – about 100 ms before it (pp. E2909, E2912). They write: “The reason we do not experience the urge to move as having happened earlier than about 200 ms before movement onset [referring to Libet’s participants’ reported W time] is simply because, at that time, the neural decision to move (crossing the decision threshold) has not yet been made” (E2910). If they are right, this is very bad news for Libet. His claim is that, in his experiments, decisions are made well before the average reported W time: −200 ms. (In a Libet-style experiment conducted by Schurger et al., average reported W time is −150 ms [p. E2905].) As I noted, if relevant proximal decisions are not made before W, Libet’s argument for the claim that they are made unconsciously fails.

Wednesday, July 7, 2021

“False positive” emotions, responsibility, and moral character

Anderson, R. A., et al.
Cognition
Volume 214, September 2021, 104770

Abstract

People often feel guilt for accidents—negative events that they did not intend or have any control over. Why might this be the case? Are there reputational benefits to doing so? Across six studies, we find support for the hypothesis that observers expect “false positive” emotions from agents during a moral encounter – emotions that are not normatively appropriate for the situation but still trigger in response to that situation. For example, if a person accidentally spills coffee on someone, most normative accounts of blame would hold that the person is not blameworthy, as the spill was accidental. Self-blame (and the guilt that accompanies it) would thus be an inappropriate response. However, in Studies 1–2 we find that observers rate an agent who feels guilt, compared to an agent who feels no guilt, as a better person, as less blameworthy for the accident, and as less likely to commit moral offenses. These attributions of moral character extend to other moral emotions like gratitude, but not to nonmoral emotions like fear, and are not driven by perceived differences in overall emotionality (Study 3). In Study 4, we demonstrate that agents who feel extremely high levels of inappropriate (false positive) guilt (e.g., agents who experience guilt but are not at all causally linked to the accident) are not perceived as having a better moral character, suggesting that merely feeling guilty is not sufficient to receive a boost in judgments of character. In Study 5, using a trust game design, we find that observers are more willing to trust others who experience false positive guilt compared to those who do not. In Study 6, we find that false positive experiences of guilt may actually be a reliable predictor of underlying moral character: self-reported predicted guilt in response to accidents negatively correlates with higher scores on a psychopathy scale.

General discussion

Collectively, our results support the hypothesis that false positive moral emotions are associated with both judgments of moral character and traits associated with moral character. We consistently found that observers use an agent's false positive experience of moral emotions (e.g., guilt, gratitude) to infer their underlying moral character, their social likability, and to predict both their future emotional responses and their future moral behavior. Specifically, we found that observers judge an agent who experienced “false positive” guilt (in response to an accidental harm) as a more moral person, more likeable, less likely to commit future moral infractions, and more trustworthy than an agent who experienced no guilt. Our results help explain the second “puzzle” regarding guilt for accidental actions (Kamtekar & Nichols, 2019). Specifically, one reason that observers may find an accidental agent less blameworthy, and yet still be wary if the agent does not feel guilt, is that such false positive guilt provides an important indicator of that agent's underlying character.

Tuesday, July 6, 2021

On the Origins of Diversity in Social Behavior

Young, L.J. & Zhang, Q.
Japanese Journal of Animal Psychology
2021.

Abstract

Here we discuss the origins of diversity in social behavior by highlighting research using the socially monogamous prairie vole. Prairie voles display a rich social behavioral repertoire involving pair bonding and consoling behavior that are not observed in typical laboratory species. Oxytocin and vasopressin play critical roles in regulating pair bonding and consoling behavior. Oxytocin and vasopressin receptors show remarkable diversity in expression patterns both between and within species. Receptor expression patterns are associated with species differences in social behaviors. Variations in receptor genes have been linked to individual variation in expression patterns. We propose that "evolvability" in the oxytocin and vasopressin receptor genes allows for the repurposing of ancient maternal and territorial circuits to give rise to novel social behaviors such as pair bonding, consoling and selective aggression. We further propose that the evolvability of these receptor genes is due to their transcriptional sensitivity to genomic variation. This model provides a foundation for investigating the molecular mechanisms giving rise to the remarkable diversity in social behaviors found in vertebrates.



While this hypothesis remains to be tested, we believe this transcriptional flexibility is key to the origin of diversity in social behavior, enables rapid social behavioral adaptation through natural selection, and contributes to the remarkable diversity in social and reproductive behaviors in the animal kingdom.


Monday, July 5, 2021

When Do Robots have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will

Nahmias, E., Allen, C. A., & Loveall, B.
In Free Will, Causality, & Neuroscience
Chapter 3

Imagine that, in the future, humans develop the technology to construct humanoid robots with very sophisticated computers instead of brains and with bodies made out of metal, plastic, and synthetic materials. The robots look, talk, and act just like humans and are able to integrate into human society and to interact with humans across any situation. They work in our offices and our restaurants, teach in our schools, and discuss the important matters of the day in our bars and coffeehouses. How do you suppose you’d respond to one of these robots if you were to discover them attempting to steal your wallet or insulting your friend? Would you regard them as free and morally responsible agents, genuinely deserving of blame and punishment?

If you’re like most people, you are more likely to regard these robots as having free will and being morally responsible if you believe that they are conscious rather than non-conscious. That is, if you think that the robots actually experience sensations and emotions, you are more likely to regard them as having free will and being morally responsible than if you think they simply behave like humans based on their internal programming but with no conscious experiences at all. But why do many people have this intuition? Philosophers and scientists typically assume that there is a deep connection between consciousness and free will, but few have developed theories to explain this connection. To the extent that they have, it’s typically via some cognitive capacity thought to be important for free will, such as reasoning or deliberation, that consciousness is supposed to enable or bolster, at least in humans. But this sort of connection between consciousness and free will is relatively weak. First, it’s contingent; given our particular cognitive architecture, it holds, but if robots or aliens could carry out the relevant cognitive capacities without being conscious, this would suggest that consciousness is not constitutive of, or essential for, free will. Second, this connection is derivative, since the main connection goes through some capacity other than consciousness. Finally, this connection does not seem to be focused on phenomenal consciousness (first-person experience or qualia), but instead, on access consciousness or self-awareness (more on these distinctions below).

From the Conclusion

In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., and Westworld), viewers tend to think of the robots differently when they are portrayed in a way that suggests they express and feel emotions. No matter how intelligent or complex their behavior, they do not come across as free and autonomous until they seem to care about what happens to them (and perhaps others). Often this is portrayed by their showing fear of their own death or the deaths of others, or expressing love, anger, or joy. Sometimes it is portrayed by the robots’ expressing reactive attitudes, such as indignation, or our feeling such attitudes towards them. Perhaps the authors of these works recognize that the robots, and their stories, become most interesting when they seem to have free will, and people will see them as free when they start to care about what happens to them, when things really matter to them, which results from their experiencing the actual (and potential) outcomes of their actions.


Sunday, July 4, 2021

Understanding Side-Effect Intentionality Asymmetries: Meaning, Morality, or Attitudes and Defaults?

Laurent SM, Reich BJ, Skorinko JLM. 
Personality and Social Psychology Bulletin. 
2021;47(3):410-425. 
doi:10.1177/0146167220928237

Abstract

People frequently label harmful (but not helpful) side effects as intentional. One proposed explanation for this asymmetry is that moral considerations fundamentally affect how people think about and apply the concept of intentional action. We propose something else: People interpret the meaning of questions about intentionally harming versus helping in fundamentally different ways. Four experiments substantially support this hypothesis. When presented with helpful (but not harmful) side effects, people interpret questions concerning intentional helping as literally asking whether helping is the agents’ intentional action or believe questions are asking about why agents acted. Presented with harmful (but not helpful) side effects, people interpret the question as asking whether agents intentionally acted, knowing this would lead to harm. Differences in participants’ definitions consistently helped to explain intentionality responses. These findings cast doubt on whether side-effect intentionality asymmetries are informative regarding people’s core understanding and application of the concept of intentional action.

From the Discussion

Second, questions about intentionality of harm may focus people on two distinct elements presented in the vignette: the agent’s intentional action (e.g., starting a profit-increasing program) and the harmful secondary outcome he knows this goal-directed action will cause. Because the concept of intentionality is most frequently applied to actions rather than consequences of actions (Laurent, Clark, & Schweitzer, 2015), reframing the question as asking about an intentional action undertaken with foreknowledge of harm has advantages. It allows consideration of key elements from the story and is responsive to what people may feel is at the heart of the question: “Did the chairman act intentionally, knowing this would lead to harm?” Notably, responses to questions capturing this idea significantly mediated intentionality responses in each experiment presented here, whereas other variables tested failed to consistently do so.