Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Emotions. Show all posts

Tuesday, December 3, 2019

A Constructionist Review of Morality and Emotions: No Evidence for Specific Links Between Moral Content and Discrete Emotions

Cameron, C. D., Lindquist, K. A., & Gray, K.
Pers Soc Psychol Rev. 
2015 Nov;19(4):371-94.
doi: 10.1177/1088868314566683.

Abstract

Morality and emotions are linked, but what is the nature of their correspondence? Many "whole number" accounts posit specific correspondences between moral content and discrete emotions, such that harm is linked to anger, and purity is linked to disgust. A review of the literature provides little support for these specific morality-emotion links. Moreover, any apparent specificity may arise from global features shared between morality and emotion, such as affect and conceptual content. These findings are consistent with a constructionist perspective of the mind, which argues against a whole number of discrete and domain-specific mental mechanisms underlying morality and emotion. Instead, constructionism emphasizes the flexible combination of basic and domain-general ingredients such as core affect and conceptualization in creating the experience of moral judgments and discrete emotions. The implications of constructionism in moral psychology are discussed, and we propose an experimental framework for rigorously testing morality-emotion links.

Thursday, November 21, 2019

A Sober Second Thought? A Pre-Registered Experiment on the Effects of Mindfulness Meditation on Political Tolerance

Michael Bang Petersen & Panagiotis Mitkidis
PsyArXiv
Originally posted October 20, 2019

Abstract

Mindfulness meditation is increasingly promoted as a tool to foster more inclusive and tolerant societies and, accordingly, meditation practice has been adopted in a number of public institutions including schools and legislatures. Here, we provide the first empirical test of the effects of mindfulness meditation on political and societal attitudes by examining whether completion of a 15-minute mindfulness meditation increases tolerance towards disliked groups relative to relevant control conditions. Analyses of data from a pilot experiment (N = 54) and a pre-registered experiment (N = 171) provide no evidence that mindfulness meditation increases political tolerance. Furthermore, exploratory analyses show that individual differences in trait mindfulness are not associated with differences in tolerance. These results suggest that there is reason to pause before recommending mindfulness meditation as a way to achieve democratically desirable outcomes or, at least, that short-term meditation is not sufficient to generate these outcomes.

The research is here.

Tuesday, November 12, 2019

Errors in Moral Forecasting: Perceptions of Affect Shape the Gap Between Moral Behaviors and Moral Forecasts

Teper, R., Zhong, C.‐B., and Inzlicht, M. (2015)
Social and Personality Psychology Compass, 9, 1–14.
doi: 10.1111/spc3.12154

Abstract

Within the past decade, the field of moral psychology has begun to disentangle the mechanics behind moral judgments, revealing the vital role that emotions play in driving these processes. However, given the well‐documented dissociation between attitudes and behaviors, we propose that an equally important issue is how emotions inform actual moral behavior – a question that has been relatively ignored up until recently. By providing a review of recent studies that have begun to explore how emotions drive actual moral behavior, we propose that emotions are instrumental in fueling real‐life moral actions. Because research examining the role of emotional processes on moral behavior is currently limited, we push for the use of behavioral measures in the field in the hopes of building a more complete theory of real‐life moral behavior.

Conclusion

Long gone are the days when emotion was written off as a distractor or a roadblock to effective moral decision making. There now exists a great deal of evidence bolstering the idea that emotions are actually necessary for initiating adaptive behavior (Bechara, 2004; Damasio, 1994; Panksepp & Biven, 2012). Furthermore, evidence from the field of moral psychology points to the fact that individuals rely quite heavily on emotional and intuitive processes when engaging in moral judgments (e.g., Haidt, 2001). However, up until recently, the playing field of moral psychology has been heavily dominated by research revolving around moral judgments alone, especially when investigating the role that emotions play in motivating moral decision-making.

A pdf can be downloaded here.

Effect of Psilocybin on Empathy and Moral Decision-Making

Thomas Pokorny, Katrin H. Preller, et al.
International Journal of Neuropsychopharmacology, 
Volume 20, Issue 9, September 2017, Pages 747–757
https://doi.org/10.1093/ijnp/pyx047

Abstract

Background
Impaired empathic abilities lead to severe negative social consequences and influence the development and treatment of several psychiatric disorders. Furthermore, empathy has been shown to play a crucial role in moral and prosocial behavior. Although the serotonin system has been implicated in modulating empathy and moral behavior, the relative contribution of the various serotonin receptor subtypes is still unknown.

Methods
We investigated the acute effect of psilocybin (0.215 mg/kg p.o.) in healthy human subjects on different facets of empathy and hypothetical moral decision-making using the multifaceted empathy test (n=32) and the moral dilemma task (n=24).

Results
Psilocybin significantly increased emotional, but not cognitive empathy compared with placebo, and the increase in implicit emotional empathy was significantly associated with psilocybin-induced changed meaning of percepts. In contrast, moral decision-making remained unaffected by psilocybin.

Conclusions
These findings provide first evidence that psilocybin has distinct effects on social cognition by enhancing emotional empathy but not moral behavior. Furthermore, together with previous findings, psilocybin appears to promote emotional empathy presumably via activation of serotonin 2A/1A receptors, suggesting that targeting serotonin 2A/1A receptors has implications for potential treatment of dysfunctional social cognition.

Monday, November 11, 2019

Incidental emotions in moral dilemmas: the influence of emotion regulation.

Raluca D. Szekely & Andrei C. Miu
Cogn Emot. 2015;29(1):64-75.
doi: 10.1080/02699931.2014.895300.

Abstract

Recent theories have argued that emotions play a central role in moral decision-making and suggested that emotion regulation may be crucial in reducing emotion-linked biases. The present studies focused on the influence of emotional experience and individual differences in emotion regulation on moral choice in dilemmas that pit harming another person against social welfare. During these "harm to save" moral dilemmas, participants experienced mostly fear and sadness but also other emotions such as compassion, guilt, anger, disgust, regret and contempt (Study 1). Fear and disgust were more frequently reported when participants made deontological choices, whereas regret was more frequently reported when participants made utilitarian choices. In addition, habitual reappraisal negatively predicted deontological choices, and this effect was significantly carried through emotional arousal (Study 2). Individual differences in the habitual use of other emotion regulation strategies (i.e., acceptance, rumination and catastrophising) did not influence moral choice. The results of the present studies indicate that negative emotions are commonly experienced during "harm to save" moral dilemmas, and they are associated with a deontological bias. By efficiently reducing emotional arousal, reappraisal can attenuate the emotion-linked deontological bias in moral choice.

General Discussion

Using "harm to save" moral dilemmas, the present studies yielded three main findings: (1) a wide spectrum of emotions are experienced during these moral dilemmas, with self-focused emotions such as fear and sadness being the most common (Study 1); (2) there is a positive relation between emotional arousal during moral dilemmas and deontological choices (Studies 1 and 2); and (3) individual differences in reappraisal, but not other emotion regulation strategies (i.e., acceptance, rumination or catastrophising), are negatively associated with deontological choices, and this effect is carried through emotional arousal (Study 2).

A pdf can be downloaded here.


Thursday, October 31, 2019

Bridging cognition and emotion in moral decision making: Role of emotion regulation

Raluca D. Szekely and Andrei C. Miu
In M. L. Bryant (Ed.): Handbook on Emotion Regulation: Processes,
Cognitive Effects and Social Consequences. Nova Science, New York

Abstract

In the last decades, the involvement of emotions in moral decision making was investigated using moral dilemmas in healthy volunteers, neuropsychological and psychiatric patients. Recent research characterized emotional experience in moral dilemmas and its association with deontological decisions. Moreover, theories debated the roles of emotion and reasoning in moral decision making and suggested that emotion regulation may be crucial in overriding emotion-driven deontological biases. After briefly introducing the reader to moral dilemma research and current perspectives on emotion and emotion-cognition interactions in this area, the present chapter reviews emerging evidence for emotion regulation in moral decision making. Inspired by recent advances in the field of emotion regulation, this chapter also highlights several avenues for future research on emotion regulation in moral psychology.

The book chapter can be downloaded here.

This is a good summary for those starting to learn about cognition, decision-making models, emotions, and morality.

Friday, October 25, 2019

Deciding Versus Reacting: Conceptions of Moral Judgment and the Reason-Affect Debate

Monin, B., Pizarro, D. A., & Beer, J. S. (2007).
Review of General Psychology, 11(2), 99–111.
https://doi.org/10.1037/1089-2680.11.2.99

Abstract

Recent approaches to moral judgment have typically pitted emotion against reason. In an effort to move beyond this debate, we propose that authors presenting diverging models are considering quite different prototypical situations: those focusing on the resolution of complex dilemmas conclude that morality involves sophisticated reasoning, whereas those studying reactions to shocking moral violations find that morality involves quick, affect-laden processes. We articulate these diverging dominant approaches and consider three directions for future research (moral temptation, moral self-image, and lay understandings of morality) that we propose have not received sufficient attention as a result of the focus on these two prototypical situations within moral psychology.

Concluding Thoughts

Recent theorizing on the psychology of moral decision making has pitted deliberative reasoning against quick affect-laden intuitions. In this article, we propose a resolution to this tension by arguing that it results from a choice of different prototypical situations: advocates of the reasoning approach have focused on sophisticated dilemmas, whereas advocates of the intuition/emotion approach have focused on reactions to other people’s moral infractions. Arbitrarily choosing one or the other as the typical moral situation has a significant impact on one’s characterization of moral judgment.

Wednesday, August 21, 2019

Tech Is Already Reading Your Emotions - But Do Algorithms Get It Right?

Jessica Baron
Forbes.com
Originally published July 18, 2019

From measuring shopper satisfaction to detecting signs of depression, companies are employing emotion-sensing facial recognition technology that is based on flawed science, according to a new study.

If the idea of having your face recorded and then analyzed for mood so that someone can intervene in your life sounds creepy, that’s because it is. But that hasn’t stopped companies like Walmart from promising to implement the technology to improve customer satisfaction, despite numerous challenges from ethicists and other consumer advocates.

At the end of the day, this flavor of facial recognition software probably is all about making you safer and happier – it wants to let you know if you’re angry or depressed so you can calm down or get help; it wants to see what kind of mood you’re in when you shop so it can help stores keep you as a customer; it wants to measure your mood while driving, playing video games, or just browsing the Internet to see what goods and services you might like to buy to improve your life.


The problem is – well, aside from the obvious privacy issues and general creep factor – that computers aren’t really that good at judging our moods based on the information they get from facial recognition technology. To top it off, this technology exhibits that same kind of racial bias that other AI programs do, assigning more negative emotions to black faces, for example. That’s probably because it’s based on flawed science.

The info is here.

Saturday, August 10, 2019

Emotions and beliefs about morality can change one another

Monica Bucciarelli and P.N. Johnson-Laird
Acta Psychologica
Volume 198, July 2019

Abstract

A dual-process theory postulates that belief and emotions about moral assertions can affect one another. The present study corroborated this prediction. Experiments 1, 2 and 3 showed that the pleasantness of a moral assertion – from loathing it to loving it – correlated with how strongly individuals believed it, i.e., its subjective probability. But, despite repeated testing, this relation did not occur for factual assertions. To create the correlation, it sufficed to change factual assertions, such as, “Advanced countries are democracies,” into moral assertions, “Advanced countries should be democracies”. Two further experiments corroborated the two-way causal relations for moral assertions. Experiment 4 showed that recall of pleasant memories about moral assertions increased their believability, and that the recall of unpleasant memories had the opposite effect. Experiment 5 showed that the creation of reasons to believe moral assertions increased the pleasantness of the emotions they evoked, and that the creation of reasons to disbelieve moral assertions had the opposite effect. Hence, emotions can change beliefs about moral assertions; and reasons can change emotions about moral assertions. We discuss the implications of these results for alternative theories of morality.

The research is here.

Here is a portion of the Discussion:

In sum, emotions and beliefs correlate for moral assertions, and a change in one can cause a change in the other. The main theoretical problem is to explain these results. They should hardly surprise Utilitarians. As we mentioned in the Introduction, one interpretation of their views (Jon Baron, p.c.) is that it is tautological to predict that if you believe a moral assertion then you will like it. And this interpretation implies that our experiments are studies in semantics, which corroborate the existence of tautologies depending on the meanings of words (contra Quine, 1953; cf. Quelhas, Rasga, & Johnson-Laird, 2017). But the degrees to which participants believed the moral assertions varied from certain to impossible. An assertion that they rated "as probable as not" is hardly a tautology, and it tended to occur with an emotional reaction of indifference. The hypothesis of a tautological interpretation cannot explain this aspect of an overall correlation in ratings on scales.

Saturday, August 3, 2019

When Do Robots Have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will

Eddy Nahmias, Corey Allen, & Bradley Loveall
Georgia State University

From the Conclusion:

If future research bolsters our initial findings, then it would appear that when people consider whether agents are free and responsible, they are considering whether the agents have capacities to feel emotions more than whether they have conscious sensations or even capacities to deliberate or reason. It's difficult to know whether people assume that phenomenal consciousness is required for or enhances capacities to deliberate and reason. And of course, we do not deny that cognitive capacities for self-reflection, imagination, and reasoning are crucial for free and responsible agency (see, e.g., Nahmias 2018). For instance, when considering agents that are assumed to have phenomenal consciousness, such as humans, it is likely that people's attributions of free will and responsibility decrease in response to information that an agent has severely diminished reasoning capacities. But people seem to have intuitions that support the idea that an essential condition for free will is the capacity to experience conscious emotions. And we find it plausible that these intuitions indicate that people take it to be essential to being a free agent that one can feel the emotions involved in reactive attitudes and in genuinely caring about one's choices and their outcomes.

(cut)

Perhaps, fiction points us towards the truth here. In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., and Westworld), viewers tend to think of the robots differently when they are portrayed in a way that suggests they express and feel emotions.  No matter how intelligent or complex their behavior, the robots do not come across as free and autonomous until they seem to care about what happens to them (and perhaps others). Often this is portrayed by their showing fear of their own or others’ deaths, or expressing love, anger, or joy. Sometimes it is portrayed by the robots’ expressing reactive attitudes, such as indignation about how humans treat them, or our feeling such attitudes towards them, for instance when they harm humans.

The research paper is here.

Tuesday, March 26, 2019

Should doctors cry at work?

Fran Robinson
BMJ 2019;364:l690

Many doctors admit to crying at work, whether openly empathising with a patient or on their own behind closed doors. Common reasons for crying are compassion for a dying patient, identifying with a patient’s situation, or feeling overwhelmed by stress and emotion.

Probably still more doctors have done so but been unwilling to admit it for fear that it could be considered unprofessional—a sign of weakness, lack of control, or incompetence. However, it’s increasingly recognised as unhealthy for doctors to bottle up their emotions.

Unexpected tragic events
Psychiatry is a specialty in which doctors might view crying as acceptable, says Annabel Price, visiting researcher at the Department of Psychiatry, University of Cambridge, and a consultant in liaison psychiatry for older adults.

Having discussed the issue with colleagues before being interviewed for this article, she says that none of them would think less of a colleague for crying at work: “There are very few doctors who haven’t felt like crying at work now and again.”

A situation that may move psychiatrists to tears is finding that a patient they’ve been closely involved with has died by suicide. “This is often an unexpected tragic event: it’s very human to become upset, and sometimes it’s hard not to cry when you hear difficult news,” says Price.

The info is here.

Wednesday, February 13, 2019

The Art of Decision-Making

Joshua Rothman
The New Yorker
Originally published January 21, 2019

Here is an excerpt:

For centuries, philosophers have tried to understand how we make decisions and, by extension, what makes any given decision sound or unsound, rational or irrational. “Decision theory,” the destination on which they’ve converged, has tended to hold that sound decisions flow from values. Faced with a choice—should we major in economics or in art history?—we first ask ourselves what we value, then seek to maximize that value.

From this perspective, a decision is essentially a value-maximizing equation. If you’re going out and can’t decide whether to take an umbrella, you could come to a decision by following a formula that assigns weights to the probability of rain, the pleasure you’ll feel in strolling unencumbered, and the displeasure you’ll feel if you get wet. Most decisions are more complex than this, but the promise of decision theory is that there’s a formula for everything, from launching a raid in Abbottabad to digging an oil well in the North Sea. Plug in your values, and the right choice pops out.

In recent decades, some philosophers have grown dissatisfied with decision theory. They point out that it becomes less useful when we’re unsure what we care about, or when we anticipate that what we care about might shift.

The info is here.

Sunday, January 27, 2019

Expectations Bias Moral Evaluations

Derek Powell and Zachary Horne
PsyArXiv Preprints
Originally created on December 23, 2018

Abstract

People’s expectations play an important role in their reactions to events. There is often disappointment when events fail to meet expectations and a special thrill to having one’s expectations exceeded. We propose that expectations influence evaluations through information-theoretic principles: less expected events do more to inform us about the state of the world than do more expected events. An implication of this proposal is that people may have inappropriately muted responses to morally significant but expected events. In two preregistered experiments, we found that people’s judgments of morally-significant events were affected by the likelihood of that event. People were more upset about events that were unexpected (e.g., a robbery at a clothing store) than events that were more expected (e.g., a robbery at a convenience store). We argue that this bias has pernicious moral consequences, including leading to reduced concern for victims in most need of help.

The preprint is here.

Thursday, December 6, 2018

Survey Finds Widespread 'Moral Distress' Among Veterinarians

Carey Goldberg
NPR.org
Originally posted October 17, 2018

In some ways, it can be harder to be a doctor of animals than a doctor of humans.

"We are in the really unenviable, and really difficult, position of caring for patients maybe for their entire lives, developing our own relationships with those animals — and then being asked to kill them," says Dr. Lisa Moses, a veterinarian at the Massachusetts Society for the Prevention of Cruelty to Animals-Angell Animal Medical Center and a bioethicist at Harvard Medical School.

She's the lead author of a study published Monday in the Journal of Veterinary Internal Medicine about "moral distress" among veterinarians. The survey of more than 800 vets found that most feel ethical qualms — at least sometimes — about what pet owners ask them to do. And that takes a toll on their mental health.

Dr. Virginia Sinnott-Stutzman is all too familiar with the results. As a senior staff veterinarian in emergency and critical care at Angell, she sees a lot of very sick animals — and quite a few decisions by owners that trouble her.

Sometimes, owners elect to have their pets put to sleep because they can't or won't pay for treatment, she says. Or the opposite, "where we know in our heart of hearts that there is no hope to save the animal, or that the animal is suffering and the owners have a set of beliefs that make them want to keep going."

The info is here.

Monday, November 12, 2018

Optimality bias in moral judgment

Julian De Freitas and Samuel G. B. Johnson
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 149-163

Abstract

We often make decisions with incomplete knowledge of their consequences. Might people nonetheless expect others to make optimal choices, despite this ignorance? Here, we show that people are sensitive to moral optimality: that people hold moral agents accountable depending on whether they make optimal choices, even when there is no way that the agent could know which choice was optimal. This result held up whether the outcome was positive, negative, inevitable, or unknown, and across within-subjects and between-subjects designs. Participants consistently distinguished between optimal and suboptimal choices, but not between suboptimal choices of varying quality — a signature pattern of the Efficiency Principle found in other areas of cognition. A mediation analysis revealed that the optimality effect occurs because people find suboptimal choices more difficult to explain and assign harsher blame accordingly, while moderation analyses found that the effect does not depend on tacit inferences about the agent's knowledge or negligence. We argue that this moral optimality bias operates largely out of awareness, reflects broader tendencies in how humans understand one another's behavior, and has real-world implications.

The research is here.

Thursday, October 18, 2018

When You Fear Your Company Has Forgotten Its Principles

Sue Shellenbarger
The Wall Street Journal
Originally published September 17, 2018

Here is an excerpt:

People who object on principle to their employers’ conduct face many obstacles. One is the bystander effect—people’s reluctance to intervene against wrongdoing when others are present and witnessing it too, Dr. Grant says. Ask yourself in such cases, “If no one acted here, what would be the consequences?” he says. While most people think first about potential damage to their reputation and relationships, the long-term effects could be worse, he says.

Be careful not to argue too passionately for the changes you want, Dr. Grant says. Show respect for others’ viewpoint, and acknowledge the flaws in your argument to show you’ve thought it through carefully.

Be open about your concerns, says Jonah Sachs, an Oakland, Calif., speaker and author of “Unsafe Thinking,” a book on creative risk-taking. People who complain in secret are more likely to make enemies and be seen as disloyal, compared with those who resist in the open, research shows.

Successful change-makers tend to frame proposed changes as benefiting the entire company and its employees and customers, rather than just themselves, Mr. Sachs says. He cites a former executive at a retail drug chain who helped persuade top management to stop selling cigarettes in its stores. While the move tracked with the company’s health-focused mission, the executive strengthened her case by correctly predicting that it would attract more health-minded customers.

The info is here.

Tuesday, August 28, 2018

How Evil Happens

Noga Arikha
www.aeon.co
Originally posted July 30, 2018

Here is an excerpt:

An account of the inability to feel any emotion for such perceived enemies can take us closer to understanding what it is like to have crossed the line beyond which one can maim and kill in cold blood. Observers at the International Criminal Court (ICC) at the Hague note frequently the absence of remorse displayed by perpetrators. The clinical psychologist Françoise Sironi, who assesses perpetrators for the ICC and treats them and their victims, has directly seen what Lifton called the ‘murder of the self’ at work – notably with Kang Kek Iew, the man known as ‘Duch’, who proudly created and directed the Khmer Rouge S-21 centre for torture and extermination in Cambodia. Duch was one of those who felt absolutely no remorse. His sole identity was his role, dutifully kept up for fear of losing himself and falling into impotence. He did not comprehend what Sironi meant when she asked him: ‘What happened to your conscience?’ The very question was gibberish to him.

Along with what Fried calls this ‘catastrophic’ desensitisation to emotional cues, cognitive functions remain intact – another Syndrome E symptom. A torturer knows exactly how to hurt, in full recognition of the victim’s pain. He – usually he – has the cognitive capacity, necessary but not sufficient for empathy, to understand the victim’s experience. He just does not care about the other’s pain except instrumentally. Further, he does not care that he does not care. Finally, he does not care that caring does, in fact, matter. The emotionally inflected judgment that underlies the moral sense is gone.

The information is here.

Tuesday, August 14, 2018

Natural-born existentialists

Ronnie de Sousa
aeon.com
Originally posted December 10, 2017

Here are two excerpts:

Much the same might be true of some of the emotional dispositions bequeathed to us by natural selection. If we follow some evolutionary psychologists in thinking that evolution has programmed us to value solidarity and authority, for example, we must recognise that those very same mechanisms promote xenophobia, racism and fascism. Some philosophers have made much of the fact that we appear to have genuinely altruistic motives: sometimes, human beings actually sacrifice themselves for complete strangers. If that is indeed a native human trait, so much the better. But it can’t be good because it’s natural. For selfishness and cruelty are no less natural. Again, naturalness can’t reasonably be why we value what we care about.

A second reason why evolution is not providence is that any given heritable trait is not simply either ‘adaptive’ or ‘maladaptive’ for the species. Some cases of fitness are frequency-dependent, which means that certain traits acquire a stable distribution in a population only if they are not universal.

(cut)

The third reason we should not equate the natural with the good is the most important. Evolution is not about us. In repeating the well-worn phrase that is supposed to sum up natural selection, ‘survival of the fittest’, we seldom think to ask: the fittest what? It won’t do to think that the phrase refers to fitness in individuals such as you and me. Even the fittest individuals never survive at all. We all die. What does survive is best described as information, much of which is encoded in the genes. That remains true despite the fashionable preoccupation with ‘epigenetic’ or otherwise non-DNA-encoded factors. The point is that ‘the fittest’ refers to just whatever gets replicated in subsequent generations – and whatever that is, it isn’t us. Every human is radically new, and – at least until cloning becomes routine – none will ever recur.

The article is here.

Thursday, June 14, 2018

Sex robots are coming. We might even fall in love with them.

Sean Illing
www.vox.com
Originally published May 11, 2018

Here is an excerpt:

Sean Illing: Your essay poses an interesting question: Is mutual love with a robot possible? What’s the answer?

Lily Eva Frank:

Our essay tried to explore some of the core elements of romantic love that people find desirable, like the idea of being a perfect match for someone or the idea that we should treasure the little traits that make someone unique, even those annoying flaws or imperfections.

The key thing is that we love someone because there’s something about being with them that matters, something particular to them that no one else has. And we make a commitment to that person that holds even when they change, like aging, for example.

Could a robot do all these things? Our answer is, in theory, yes. But only a very advanced form of artificial intelligence could manage it because it would have to do more than just perform as if it were a person doing the loving. The robot would have to have feelings and internal experiences. You might even say that it would have to be self-aware.

But that would leave open the possibility that the sex bot might not want to have sex with you, which sort of defeats the purpose of developing these technologies in the first place.

(cut)

I think people are weird enough that it is probably possible for them to fall in love with a cat or a dog or a machine that doesn't reciprocate the feelings. A few outspoken proponents of sex dolls and robots claim they love them. Check out the testimonials page on the websites of sex doll manufacturers; they say things like, "Three years later, I love her as much as the first day I met her." I don't want to dismiss these people's reports.

The information is here.

Tuesday, June 12, 2018

Is it Too Soon? The Ethics of Recovery from Grief

John Danaher
Philosophical Disquisitions
Originally published May 11, 2016

Here is an excerpt:

This raises an obvious and important question in the ethics of grief recovery. Is there a certain mourning period that should be observed following the death of a loved one? If you get back on your feet too quickly, does that say something negative about the relationship you had with the person who died (or about you)? To be more pointed: if I can re-immerse myself in my work a mere three weeks after my sister’s death, does that mean there is something wrong with me or something deficient in the relationship I had with her?

There is a philosophical literature offering answers to these questions, but from what I have read the majority of it does not deal with the ethics of recovering from a sibling’s death. Indeed, I haven’t found anything that deals directly with this issue. Instead, the majority of the literature deals with the ethics of recovery from the death of a spouse or intimate partner. What’s more, when they discuss that topic, they seem to have one scenario in mind: how soon is too soon when it comes to starting an intimate relationship with another person?

Analysing the ethical norms that should apply to that scenario is certainly of value, but it is hardly the only scenario worthy of consideration, and it is obviously somewhat distinct from the scenario that I am facing. I suspect that different norms apply to different relationships and this is likely to affect the ethics of recovery across those different relationship types.

The information is here.