Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, September 27, 2021

An African Theory of Moral Status: A Relational Alternative to Individualism and Holism.

Metz, T. (2012).
Ethic Theory Moral Prac 15, 387–402. 
https://doi.org/10.1007/s10677-011-9302-y

Abstract

The dominant conceptions of moral status in the English-speaking literature are either holist or individualist, neither of which accounts well for widespread judgments that: animals and humans both have moral status that is of the same kind but different in degree; even a severely mentally incapacitated human being has a greater moral status than an animal with identical internal properties; and a newborn infant has a greater moral status than a mid-to-late stage foetus. Holists accord no moral status to any of these beings, assigning it only to groups to which they belong, while individualists such as welfarists grant an equal moral status to humans and many animals, and Kantians accord no moral status either to animals or severely mentally incapacitated humans. I argue that an underexplored, modal-relational perspective does a better job of accounting for degrees of moral status. According to modal-relationalism, something has moral status insofar as it is capable of having a certain causal or intensional connection with another being. I articulate a novel instance of modal-relationalism grounded in salient sub-Saharan moral views, roughly according to which the greater a being's capacity to be part of a communal relationship with us, the greater its moral status. I then demonstrate that this new, African-based theory entails and plausibly explains the above judgments, among others, in a unified way.

From the end of the article:

Those deeply committed to holism and individualism, or even a combination of them, may well not be convinced by this discussion. Diehard holists will reject the idea that anything other than a group can ground moral status, while pure individualists will reject the recurrent suggestion that two beings that are internally identical (foetus v neonate, severely mentally incapacitated human v animal) could differ in their moral status. However, my aim has not been to convince anyone to change her mind, or even to provide a complete justification for doing so. My goals have instead been the more limited ones of articulating a new, modal-relational account of moral status grounded in sub-Saharan moral philosophy, demonstrating that it avoids the severe parochialism facing existing relational accounts, and showing that it accounts better than standard Western theories for a variety of widely shared intuitions about what has moral status and to what degree. Many of these intuitions are captured by neither holism nor individualism and have lacked a firm philosophical foundation up to now. Of importance here is the African theory’s promise to underwrite the ideas that humans and animals have a moral status grounded in the same property that differs in degree, that severely mentally incapacitated humans have a greater moral status than animals with the same internal properties, and that a human’s moral status increases as it develops from the embryonic to foetal to neo-natal stages.

Saturday, August 21, 2021

The relational logic of moral inference

Crockett, M., Everett, J. A. C., Gill, M., & Siegel, J. 
(2021, July 9). https://doi.org/10.31234/osf.io/82c6y

Abstract

How do we make inferences about the moral character of others? Here we review recent work on the cognitive mechanisms of moral inference and impression updating. We show that moral inference follows basic principles of Bayesian inference, but also departs from the standard Bayesian model in ways that may facilitate the maintenance of social relationships. Moral inference is not only sensitive to whether people make moral decisions, but also to features of decisions that reveal their suitability as a relational partner. Together these findings suggest that moral inference follows a relational logic: people form and update moral impressions in ways that are responsive to the demands of ongoing social relationships and particular social roles. We discuss implications of these findings for theories of moral cognition and identify new directions for research on human morality and person perception.

Summary

There is growing evidence that people infer moral character from behaviors that are not explicitly moral. The data so far suggest that people who are patient, hard-working, tolerant of ambiguity, risk-averse, and actively open-minded are seen as more moral and trustworthy. While at first blush this collection of preferences may seem arbitrary, considering moral inference from a relational perspective reveals a coherent logic. All of these preferences are correlated with cooperative behavior, and comprise traits that are desirable for long-term relationship partners. Reaping the benefits of long-term relationships requires patience and a tolerance for ambiguity: sometimes people make mistakes despite good intentions. Erring on the side of caution and actively seeking evidence to inform decision-making in social situations not only helps prevent harmful outcomes (Kappes et al., 2019), but also signals respect: social life is fraught with uncertainty (FeldmanHall & Shenhav, 2019; Kappes et al., 2019), and assuming we know what's best for another person can have bad consequences, even when our intentions are good. If evidence continues to suggest that certain types of non-moral preferences are preferred in social partners, partner choice mechanisms may explain the prevalence of those preferences in the broader population.

Monday, July 5, 2021

When Do Robots have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will

Nahmias, E., Allen, C. A., & Loveall, B.
In Free Will, Causality, & Neuroscience
Chapter 3

Imagine that, in the future, humans develop the technology to construct humanoid robots with very sophisticated computers instead of brains and with bodies made out of metal, plastic, and synthetic materials. The robots look, talk, and act just like humans and are able to integrate into human society and to interact with humans across any situation. They work in our offices and our restaurants, teach in our schools, and discuss the important matters of the day in our bars and coffeehouses. How do you suppose you’d respond to one of these robots if you were to discover them attempting to steal your wallet or insulting your friend? Would you regard them as free and morally responsible agents, genuinely deserving of blame and punishment?

If you’re like most people, you are more likely to regard these robots as having free will and being morally responsible if you believe that they are conscious rather than non-conscious. That is, if you think that the robots actually experience sensations and emotions, you are more likely to regard them as having free will and being morally responsible than if you think they simply behave like humans based on their internal programming but with no conscious experiences at all. But why do many people have this intuition? Philosophers and scientists typically assume that there is a deep connection between consciousness and free will, but few have developed theories to explain this connection. To the extent that they have, it’s typically via some cognitive capacity thought to be important for free will, such as reasoning or deliberation, that consciousness is supposed to enable or bolster, at least in humans. But this sort of connection between consciousness and free will is relatively weak. First, it’s contingent; given our particular cognitive architecture, it holds, but if robots or aliens could carry out the relevant cognitive capacities without being conscious, this would suggest that consciousness is not constitutive of, or essential for, free will. Second, this connection is derivative, since the main connection goes through some capacity other than consciousness. Finally, this connection does not seem to be focused on phenomenal consciousness (first-person experience or qualia), but instead, on access consciousness or self-awareness (more on these distinctions below).

From the Conclusion

In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., and Westworld), viewers tend to think of the robots differently when they are portrayed in a way that suggests they express and feel emotions. No matter how intelligent or complex their behavior, they do not come across as free and autonomous until they seem to care about what happens to them (and perhaps others). Often this is portrayed by their showing fear of their own death or that of others, or expressing love, anger, or joy. Sometimes it is portrayed by the robots' expressing reactive attitudes, such as indignation, or our feeling such attitudes towards them. Perhaps the authors of these works recognize that the robots, and their stories, become most interesting when they seem to have free will, and people will see them as free when they start to care about what happens to them, when things really matter to them, which results from their experiencing the actual (and potential) outcomes of their actions.


Saturday, July 3, 2021

Binding moral values gain importance in the presence of close others


Yudkin, D.A., Gantman, A.P., Hofmann, W. et al. 
Nat Commun 12, 2718 (2021). 
https://doi.org/10.1038/s41467-021-22566-6

Abstract

A key function of morality is to regulate social behavior. Research suggests moral values may be divided into two types: binding values, which govern behavior in groups, and individualizing values, which promote personal rights and freedoms. Because people tend to mentally activate concepts in situations in which they may prove useful, the importance they afford moral values may vary according to whom they are with in the moment. In particular, because binding values help regulate communal behavior, people may afford these values more importance when in the presence of close (versus distant) others. Five studies test and support this hypothesis. First, we use a custom smartphone application to repeatedly record participants’ (n = 1166) current social context and the importance they afforded moral values. Results show people rate moral values as more important when in the presence of close others, and this effect is stronger for binding than individualizing values—an effect that replicates in a large preregistered online sample (n = 2016). A lab study (n = 390) and two preregistered online experiments (n = 580 and n = 752) provide convergent evidence that people afford binding, but not individualizing, values more importance when in the real or imagined presence of close others. Our results suggest people selectively activate different moral values according to the demands of the situation, and show how the mere presence of others can affect moral thinking.

From the Discussion

Our findings converge with work highlighting the practical contexts where binding values are pitted against individualizing ones. Research on the psychology of whistleblowing, for example, suggests that the decision over whether to report unethical behavior in one’s own organization reflects a tradeoff between loyalty (to one’s community) and fairness (to society in general). Other research has found that increasing or decreasing people’s “psychological distance” from a situation affects the degree to which they apply binding versus individualizing principles. For example, research shows that prompting people to take a detached (versus immersed) perspective on their own actions renders them more likely to apply impartial principles in punishing close others for moral transgressions. By contrast, inducing feelings of empathy toward others (which could be construed as increasing feelings of psychological closeness) increases people’s likelihood of showing favoritism toward them in violation of general fairness norms. Our work highlights a psychological process that might help to explain these patterns of behavior: people are more prone to act according to binding values when they are with close others precisely because that relational context activates those values in the mind.

Wednesday, June 30, 2021

Extortion, intuition, and the dark side of reciprocity

Bernhard, R., & Cushman, F. A. 
(2021, April 22). 
https://doi.org/10.31234/osf.io/kycwa

Abstract

Extortion occurs when one person uses some combination of threats and promises to extract an unfair share of benefits from another. Although extortion is a pervasive feature of human interaction, it has received relatively little attention in psychological research. To this end, we begin by observing that extortion is structured quite similarly to far better-studied “reciprocal” social behaviors, such as conditional cooperation and retributive punishment. All of these strategies are designed to elicit some desirable behavior from a social partner, and do so by constructing conditional incentives; the main difference is that the desired behavioral response is an unfair or unjust allocation of resources during extortion, whereas it is often a fair or just distribution of resources for reciprocal cooperation and punishment. Thus, we conjecture, a common set of psychological mechanisms may render these strategies successful. We know from prior work that prosocial forms of reciprocity often work best when implemented inflexibly and intuitively, rather than deliberatively. This both affords long-term commitment to the reciprocal strategy, and also signals this commitment to social partners. We argue that, for the same reasons, extortion is likely to depend largely upon inflexible, intuitive psychological processes. Several existing lines of circumstantial evidence support this conjecture.

From the Conclusion

An essential part of our analysis is to characterize strategies, rather than individual behaviors, as "prosocial" or "antisocial". Extortionate strategies can be implemented by behaviors that "help" (as in the case of a manager who gives promotions to those who work uncompensated hours), while prosocial strategies can be implemented by behaviors that harm (as in the case of the CEO who finds out and reprimands this manager). This manner of thinking at the level of strategies, rather than behavior, invites a broader realignment of our perspective on the relationship between intuition and social behavior. If our focus were on individual behaviors, we might have posed the question, "Does intuition support cooperation or defection?". Framed this way, the recent literature could be taken to suggest the answer is "cooperation"—and, therefore, that intuition promotes prosociality. Surely this is often true, but we suggest that intuitive cooperation can also serve antisocial ends. Meanwhile, as we have emphasized, a prosocial strategy such as tit-for-tat (TFT) may benefit from intuitive (reciprocal) defection. Quickly, the question, "Does intuition support cooperation or defection?"—and any implied relationship to the question "Does intuition support prosocial or antisocial behavior?"—begins to look ill-posed.

Monday, June 28, 2021

You are a network

Kathleen Wallace
aeon.com
Originally published

Here is an excerpt:

Social identities are traits of selves in virtue of membership in communities (local, professional, ethnic, religious, political), or in virtue of social categories (such as race, gender, class, political affiliation) or interpersonal relations (such as being a spouse, sibling, parent, friend, neighbour). These views imply that it’s not only embodiment and not only memory or consciousness of social relations but the relations themselves that also matter to who the self is. What philosophers call ‘4E views’ of cognition – for embodied, embedded, enactive and extended cognition – are also a move in the direction of a more relational, less ‘container’, view of the self. Relational views signal a paradigm shift from a reductive approach to one that seeks to recognise the complexity of the self. The network self view further develops this line of thought and says that the self is relational through and through, consisting not only of social but also physical, genetic, psychological, emotional and biological relations that together form a network self. The self also changes over time, acquiring and losing traits in virtue of new social locations and relations, even as it continues as that one self.

How do you self-identify? You probably have many aspects to yourself and would resist being reduced to or stereotyped as any one of them. But you might still identify yourself in terms of your heritage, ethnicity, race, religion: identities that are often prominent in identity politics. You might identify yourself in terms of other social and personal relationships and characteristics – ‘I’m Mary’s sister.’ ‘I’m a music-lover.’ ‘I’m Emily’s thesis advisor.’ ‘I’m a Chicagoan.’ Or you might identify personality characteristics: ‘I’m an extrovert’; or commitments: ‘I care about the environment.’ ‘I’m honest.’ You might identify yourself comparatively: ‘I’m the tallest person in my family’; or in terms of your political beliefs or affiliations: ‘I’m an independent’; or temporally: ‘I’m the person who lived down the hall from you in college,’ or ‘I’m getting married next year.’ Some of these are more important than others, and some are fleeting. The point is that who you are is more complex than any one of your identities. Thinking of the self as a network is a way to conceptualise this complexity and fluidity.

Let’s take a concrete example. Consider Lindsey: she is spouse, mother, novelist, English speaker, Irish Catholic, feminist, professor of philosophy, automobile driver, psychobiological organism, introverted, fearful of heights, left-handed, carrier of Huntington’s disease (HD), resident of New York City. This is not an exhaustive set, just a selection of traits or identities. Traits are related to one another to form a network of traits. Lindsey is an inclusive network, a plurality of traits related to one another. The overall character – the integrity – of a self is constituted by the unique interrelatedness of its particular relational traits, psychobiological, social, political, cultural, linguistic and physical.

Thursday, May 13, 2021

Technology and the Value of Trust: Can we trust technology? Should we?

John Danaher
Philosophical Disquisitions
Originally published 30 Mar 21

Can we trust technology? Should we try to make technology, particularly AI, more trustworthy? These are questions that have perplexed philosophers and policy-makers in recent years. The EU’s High Level Expert Group on AI has, for example, recommended that the primary goal of AI policy and law in the EU should be to make the technology more trustworthy. But some philosophers have critiqued this goal as being borderline incoherent. You cannot trust AI, they say, because AI is just a thing. Trust can exist between people and other people, not between people and things.

This is an old debate. Trust is a central value in human society. The fact that I can trust my partner not to betray me is one of the things that makes our relationship workable and meaningful. The fact that I can trust my neighbours not to kill me is one of the things that allows me to sleep at night. Indeed, so implicit is this trust that I rarely think about it. It is one of the background conditions that makes other things in my life possible. Still, it is true that when I think about trust, and when I think about what it is that makes trust valuable, I usually think about trust in my relationships with other people, not my relationships with things.

But would it be so terrible to talk about trust in technology? Should we use some other term instead such as ‘reliable’ or ‘confidence-inspiring’? Or should we, as some blockchain enthusiasts have argued, use technology to create a ‘post-trust’ system of social governance?

I want to offer some quick thoughts on these questions in this article. I will do so in four stages. First, I will briefly review some of the philosophical debates about trust in people and trust in things. Second, I will consider the value of trust, distinguishing between its intrinsic and extrinsic components. Third, I will suggest that it is meaningful to talk about trust in technology, but that the kind of trust we have in technology has a different value to the kind of trust we have in other people. Finally, I will argue that most talk about building ‘trustworthy’ technology is misleading: the goal of most of these policies is to obviate or override the need for trust.

Saturday, May 1, 2021

Could you hate a robot? And does it matter if you could?

Ryland, H. 
AI & Soc (2021).
https://doi.org/10.1007/s00146-021-01173-5

Abstract

This article defends two claims. First, humans could be in relationships characterised by hate with some robots. Second, it matters that humans could hate robots, as this hate could wrong the robots (by leaving them at risk of mistreatment, exploitation, etc.). In defending this second claim, I will thus be accepting that morally considerable robots either currently exist, or will exist in the near future, and so it can matter (morally speaking) how we treat these robots. The arguments presented in this article make an important original contribution to the robo-philosophy literature, and particularly the literature on human–robot relationships (which typically only consider positive relationship types, e.g., love, friendship, etc.). Additionally, as explained at the end of the article, my discussions of robot hate could also have notable consequences for the emerging robot rights movement. Specifically, I argue that understanding human–robot relationships characterised by hate could actually help theorists argue for the rights of robots.

Conclusion

This article has argued for two claims. First, humans could be in relationships characterised by hate with morally considerable robots. Second, it matters that humans could hate these robots. This is at least partly because such hateful relations could have long-term negative effects for the robot (e.g., by encouraging bad will towards the robots). The article ended by explaining how discussions of human–robot relationships characterised by hate are connected to discussions of robot rights. I argued that the conditions for a robot being an object of hate and for having rights are the same—being sufficiently person-like. I then suggested how my discussions of human–robot relationships characterised by hate could be used to support, rather than undermine, the robot rights movement.

Sunday, January 10, 2021

Doctors Dating Patients: Love, Actually?

Shelly Reese
medscape.com
Originally posted 10 Dec 20

Here is an excerpt:

Not surprisingly, those who have seen such relationships end in messy, contentious divorces or who know stories of punitive actions are stridently opposed to the idea. "Never! Grounds for losing your license"; "it could only result in trouble"; "better to keep this absolute"; "you're asking for a horror story," wrote four male physicians.

Although doctor-patient romances don't frequently come to the attention of medical boards or courts until they have soured, even "happy ending" relationships may come at a cost. For example, in 2017, the Iowa Board of Medicine fined an orthopedic surgeon $5000 and ordered him to complete a professional boundaries program because he became involved with a patient while or soon after providing care, despite the fact that the couple had subsequently married.

Ethics aside, "this is a very dangerous situation, socially and professionally," writes a male physician in Pennsylvania. A New York physician agreed: "Many of my colleagues marry their patients, even after they do surgery on them. It's a sticky situation."

Doctors' Attitudes Are Shifting

The American Medical Association clearly states that sexual contact that is concurrent with the doctor/patient relationship constitutes sexual misconduct and that even a romance with a former patient "may be unduly influenced by the previous physician-patient relationship."

Although doctors' attitudes on the subject are evolving, that's not to say they suddenly believe they can start asking their patients out to dinner. Very few doctors (2%) condone romantic relationships with existing patients — a percentage that has remained largely unchanged over the past 10 years. Instead, physicians are taking a more nuanced approach to the issue.

Monday, December 21, 2020

Physicians' Ethics Change With Societal Trends

Batya S. Yasgur
MedScape.com
Originally posted 23 Nov 20

Here is an excerpt:

Are Romantic Relationships With Patients Always Off Limits?

Medscape asked physicians whether it was acceptable to become romantically or sexually involved with a patient. Compared to 2010, in 2020, many more respondents were comfortable with having a relationship with a former patient after 6 months had elapsed. In 2020, 2% said they were comfortable having a romance with a current patient; 26% were comfortable being romantic with a person who had stopped being a patient 6 months earlier, but 62% said flat-out 'no' to the concept. In 2010, 83% said "no" to the idea of dating a patient; fewer than 1% agreed that dating a current patient was acceptable, and 12% said it was okay after 6 months.

Some respondents felt strongly that romantic or sexual involvement is always off limits, even months or years after the physician is no longer treating the patient. "Once a patient, always a patient," wrote a psychiatrist.

On the other hand, many respondents thought being a "patient" was not a lifelong status. An orthopedic surgeon wrote, "After 6 months, they are no longer your patient." Several respondents said involvement was okay if the physician stopped treating the patient and referred the patient to another provider. Others recommended a longer wait time.

"Although most doctors have traditionally kept their personal and professional lives separate, they are no longer as bothered by bending of boundaries and have found a zone of acceptability in the 6-month waiting period," Goodman said.

Packer added that the "greater relaxation of sexual standards and boundaries in general" might have had a bearing on survey responses because "doctors are part of those changing societal norms."

Evans suggested that the rise of individualism and autonomy partially accounts for the changing attitudes toward physician-patient (or former patient) relationships. "Being prohibited from having a relationship with a patient or former patient is increasingly being seen as an infringement on civil liberties and autonomy, which is a major theme these days."

Wednesday, November 11, 2020

How social relationships shape moral judgment

Earp, B. D., et al. (2020, September 18).

Abstract

Our judgments of whether an action is morally wrong depend on who is involved and their relationship to one another. But how, when, and why do social relationships shape such judgments? Here we provide new theory and evidence to address this question. In a pre-registered study of U.S. participants (n = 423, nationally representative for age, race and gender), we show that particular social relationships (like those between romantic partners, housemates, or siblings) are normatively expected to serve distinct cooperative functions – including care, reciprocity, hierarchy, and mating – to different degrees. In a second pre-registered study (n = 1,320) we show that these relationship-specific norms, in turn, influence the severity of moral judgments concerning the wrongness of actions that violate cooperative expectations. These data provide evidence for a unifying theory of relational morality that makes highly precise out-of-sample predictions about specific patterns of moral judgments across relationships. Our findings show how the perceived morality of actions depends not only on the actions themselves, but also on the relational context in which those actions occur.

From the Discussion

In other relationships, by contrast, such as those between friends, family members, or romantic partners (so-called “communal” relationships), reciprocity takes a different form: that of mutually expected responsiveness to one another’s needs. In this form of reciprocity, each party tracks the other’s needs (rather than specific benefits provided) and strives to meet these needs to the best of their respective abilities, in proportion to the degree of responsibility each has assumed for the other’s welfare. Future work should distinguish between these two types of reciprocity: that is, mutual care-based reciprocity in communal relationships (when both partners have similar needs and abilities) and tit-for-tat reciprocity between “transactional” cooperation partners who have equal standing or claim on a resource.

Friday, November 6, 2020

Deluded, with reason

Huw Green
aeon.co
Originally published 31 Aug 20

Here is an excerpt:

Of course, beliefs don’t exist only in a private mental context, but can also be held in place by our relationships and social commitments. Consider how political identities often involve a cluster of commitments to various beliefs, even where there is no logical connection between them – for instance, how a person who advocates for, say, trans rights is also more likely to endorse Left-wing economic policies. As the British clinical psychologist Vaughan Bell and his colleagues note in their preprint, ‘De-rationalising Delusions’ (2019), beliefs facilitate affiliation and intragroup trust. They cite earlier philosophical work by others that suggests ‘reasoning is not for the refinement of personal knowledge … but for argumentation, social communication and persuasion’. Indeed, our relationships usually ground our beliefs in a beneficial way, preventing us from developing ideas too disparate from those of our peers, and helping us to maintain a set of ‘healthy’ beliefs that promote our basic wellbeing and continuity in our sense of self.

Given the social function of beliefs, it’s little surprise that delusions usually contain social themes. Might delusion then be a problem of social affiliation, rather than a purely cognitive issue? Bell’s team make just this claim, proposing that there is a broader dysfunction to what they call ‘coalitional cognition’ (important for handling social relationships) involved in the generation of delusions. Harmful social relationships and experiences could play a role here. It is now widely acknowledged that there is a connection between traumatic experiences and symptoms of psychosis. It’s easy to see how trauma could have a pervasive impact on a person’s sense of how safe and trustworthy the world feels, in turn affecting their belief systems.

The British philosopher Matthew Ratcliffe and his colleagues made this point in their 2014 paper, observing how ‘traumatic events are often said to “shatter” a way of experiencing the world and other people that was previously taken for granted’. They add that a ‘loss of trust in the world involves a pronounced and widespread sense of unpredictability’ that could make people liable to delusions because the ideas we entertain are likely to be shaped by what feels plausible in the context of our subjective experience. Loss of trust is not the same as the absence of a grounding belief, but I would argue that it bears an important similarity. When we lose trust in something, we might say that we find it hard to believe in it. Perhaps loss of certain forms of ordinary belief, especially around close social relationships, makes it possible to acquire beliefs of a different sort altogether.

Wednesday, October 28, 2020

Should we campaign against sex robots?

Danaher, J., Earp, B. D., & Sandberg, A. (forthcoming). 
In J. Danaher & N. McArthur (Eds.) 
Robot Sex: Social and Ethical Implications
Cambridge, MA: MIT Press.

Abstract: 

In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue that the particular claims advanced by the CASR are unpersuasive, partly due to a lack of clarity about the campaign’s aims and partly due to substantive defects in the main ethical objections put forward by the campaign’s founder(s). Second, broadening our inquiry beyond the arguments proffered by the campaign itself, we argue that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex, which is likely to be unpalatable to those who are active in the campaign. In making this argument we draw upon lessons from the campaign against killer robots. Finally, we conclude by suggesting that although a generalised campaign against sex robots is unwarranted, there are legitimate concerns that one can raise about the development of sex robots.

Conclusion

Robots are going to form an increasingly integral part of human social life. Sex robots are likely to be among them. Though the proponents of the CASR seem deeply concerned by this prospect, we have argued that there is nothing in the nature of sex robots themselves that warrants preemptive opposition to their development. The arguments of the campaign itself are vague and premised on a misleading analogy between sex robots and human sex work. Furthermore, drawing upon the example of the Campaign to Stop Killer Robots, we suggest that there are no bad-making properties of sex robots that give rise to similarly serious levels of concern. The bad-making properties of sex robots are speculative and indirect: preventing their development may not prevent the problems from arising. Preventing the development of killer robots is very different: if you stop the robots you stop the prima facie harm.

In conclusion, we should preemptively campaign against robots when we have reason to think that a moral or practical harm caused by their use can best be avoided or reduced as a result of those efforts. By contrast, to engage in such a campaign as a way of fighting against—or preempting—indirect harms, whose ultimate source is not the technology itself but rather individual choices or broader social institutions, is likely to be a comparative waste of effort.

Tuesday, October 13, 2020

Machine learning uncovers the most robust self-report predictors of relationship quality across 43 longitudinal couples studies

S. Joel and others
Proceedings of the National Academy of Sciences 
Aug 2020, 117 (32) 19061-19071
DOI: 10.1073/pnas.1917036117

Abstract

Given the powerful implications of relationship quality for health and well-being, a central mission of relationship science is explaining why some romantic relationships thrive more than others. This large-scale project used machine learning (i.e., Random Forests) to 1) quantify the extent to which relationship quality is predictable and 2) identify which constructs reliably predict relationship quality. Across 43 dyadic longitudinal datasets from 29 laboratories, the top relationship-specific predictors of relationship quality were perceived-partner commitment, appreciation, sexual satisfaction, perceived-partner satisfaction, and conflict. The top individual-difference predictors were life satisfaction, negative affect, depression, attachment avoidance, and attachment anxiety. Overall, relationship-specific variables predicted up to 45% of variance at baseline, and up to 18% of variance at the end of each study. Individual differences also performed well (21% and 12%, respectively). Actor-reported variables (i.e., own relationship-specific and individual-difference variables) predicted two to four times more variance than partner-reported variables (i.e., the partner’s ratings on those variables). Importantly, individual differences and partner reports had no predictive effects beyond actor-reported relationship-specific variables alone. These findings imply that the sum of all individual differences and partner experiences exert their influence on relationship quality via a person’s own relationship-specific experiences, and effects due to moderation by individual differences and moderation by partner-reports may be quite small. Finally, relationship-quality change (i.e., increases or decreases in relationship quality over the course of a study) was largely unpredictable from any combination of self-report variables. This collective effort should guide future models of relationships.

Significance

What predicts how happy people are with their romantic relationships? Relationship science—an interdisciplinary field spanning psychology, sociology, economics, family studies, and communication—has identified hundreds of variables that purportedly shape romantic relationship quality. The current project used machine learning to directly quantify and compare the predictive power of many such variables among 11,196 romantic couples. People’s own judgments about the relationship itself—such as how satisfied and committed they perceived their partners to be, and how appreciative they felt toward their partners—explained approximately 45% of their current satisfaction. The partner’s judgments did not add information, nor did either person’s personalities or traits. Furthermore, none of these variables could predict whose relationship quality would increase versus decrease over time.
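The analysis pipeline the authors describe can be sketched with scikit-learn's Random Forest implementation. Everything below is a simulation: the data are randomly generated and the predictor names are hypothetical stand-ins, not the study's actual dataset or code. The point is simply the workflow of reporting cross-validated variance explained (rather than in-sample fit) and inspecting which predictors the forest relied on.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Simulated self-report predictors (standing in for constructs such as
# perceived-partner commitment or appreciation) for 500 participants.
X = rng.normal(size=(500, 5))

# Simulated relationship-quality scores driven by the first two predictors,
# plus noise, so only part of the variance is predictable.
y = 0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=0.7, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Out-of-sample predictions via 5-fold cross-validation, analogous to
# quantifying how much variance the predictors explain in unseen data.
pred = cross_val_predict(model, X, y, cv=5)
print(f"Cross-validated variance explained (R^2): {r2_score(y, pred):.2f}")

# Feature importances indicate which predictors the forest relied on most.
fitted = model.fit(X, y)
for i, imp in enumerate(fitted.feature_importances_):
    print(f"predictor_{i}: {imp:.3f}")
```

In this toy setup, the cross-validated R² lands well below the true signal share by construction, which mirrors why out-of-sample estimates such as the study's 45%/18% figures are more honest than in-sample fit.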

Monday, October 12, 2020

The U.S. Has an Empathy Deficit—Here’s what we can do about it.

Judith Hall and Mark Leary
Scientific American
Originally posted 17 Sept 20

Here are two excerpts:

Fixing this empathy deficit is a challenge, and not just a matter of having good political or corporate leaders or of people treating each other with good will and respect. The deeper problem is that empathy is a fundamentally squishy term. Like many broad and complicated concepts, empathy can mean many things. Even the researchers who study it do not always say what they mean, or measure empathy in the same way in their studies—and they definitely do not agree on a definition. In fact, there are stark contradictions: what one researcher calls empathy is not empathy to another.

When laypeople are surveyed on how they define empathy, the range of answers is wide as well. Some people think empathy is a feeling; others focus on what a person does or says. Some think it is being good at reading someone’s nonverbal cues, while others include the mental orientation of putting oneself in someone else’s shoes. Still others see empathy as the ability or effort to imagine others’ feelings, or as just feeling “connected” or “relating” to someone. Some think it is a moral stance to be concerned about other people’s welfare and a desire to help them out. Sometimes it seems like “empathy” is just another way of saying “being a nice and decent person.” Actions, feelings, perspectives, motives, values—all of these are “empathy” according to someone.

(cut)

Whatever people think empathy is, it’s a powerful force and human beings need it. These three things might help to remedy our collective empathy deficit:

Take the time to ask those you encounter how they are feeling, and really listen. Try to put yourself in their shoes. Remember that we all tend to underestimate other people’s emotional distress, and we’re most likely to do so when those people are different from us.

Remind yourself that almost everyone is at the end of their rope these days. Many people barely have enough energy to handle their own problems, so they don’t have their normal ability to think about yours.

Sunday, June 7, 2020

Friends, Lovers or Nothing: Men and Women Differ in Their Perceptions of Sex Robots and Platonic Love Robots

M. Nordmo, J. O. Naess, and others
Front. Psychol., 13 March 2020
https://doi.org/10.3389/fpsyg.2020.00355

Abstract

Physical and emotional intimacy between humans and robots may become commonplace over the next decades, as technology improves at a rapid rate. This development raises new questions about how people perceive robots designed for different kinds of intimacy, both as companions and potentially as competitors. We performed a randomized experiment in which participants read about either a robot that could only perform sexual acts, or one that could only engage in non-sexual platonic love relationships. The results of the current study show that females have less positive views of robots, and especially of sex robots, compared to males. Contrary to the expectation rooted in evolutionary psychology, females expected to feel more jealousy if their partner got a sex robot, rather than a platonic love robot. The results further suggest that people project their own feelings about robots onto their partner, erroneously expecting their partner to react as they themselves would to the thought of one’s partner having a robot.

From the Discussion

The results of the analysis confirm previous findings that males are more positive toward the advent of robots than females (Scheutz and Arnold, 2016). Females who had read about the sex robot reported particularly elevated levels of jealousy, less favorable attitudes, more dislike, and greater predicted dislike on the part of their partner. This pattern was not found in the male sample, whose feelings were largely unaffected by the type of robot they were made to envision.

One possible explanation for the gender difference could be a combination of differences in how males and females frame the concept of human-robot sexual relations, as well as different attitudes toward masturbation and the use of artificial stimulants for masturbatory purposes.

The research is here.

Tuesday, February 18, 2020

Can an Evidence-Based Approach Improve the Patient-Physician Relationship?

A. S. Cifu, A. Lembo, & A. M. Davis
JAMA. 2020;323(1):31-32.
doi:10.1001/jama.2019.19427

Here is an excerpt:

Through these steps, the research team identified potentially useful clinical approaches that were perceived to contribute to physician “presence,” defined by the authors as a purposeful practice of “awareness, focus, and attention with the intent to understand and connect with patients.”

These practices were rated by patients and clinicians on their likely effects and feasibility in practice. A Delphi process was used to condense 13 preliminary practices into 5 final recommendations, which were (1) prepare with intention, (2) listen intently and completely, (3) agree on what matters most, (4) connect with the patient’s story, and (5) explore emotional cues. Each of these practices is complex, and the authors provide detailed explanations, including narrative examples and links to outcomes, that are summarized in the article and included in more detail in the online supplemental material.

If implemented in practice, these 5 practices suggested by Zulman and colleagues are likely to enhance patient-physician relationships, which ideally could help improve physician satisfaction and well-being, reduce physician frustration, improve clinical outcomes, and reduce health care costs.

Importantly, the authors also call for system-level interventions to create an environment for the implementation of these practices.

Although the patient-physician interaction is at the core of most physicians’ activities and has led to an entire genre of literature and television programs, very little is actually known about what makes for an effective relationship.

The info is here.

Friday, January 31, 2020

Strength of conviction won’t help to persuade when people disagree

ucl.ac.uk
Originally posted 16 Dec 19

The brain scanning study, published in Nature Neuroscience, reveals a new type of confirmation bias that can make it very difficult to alter people’s opinions.

“We found that when people disagree, their brains fail to encode the quality of the other person’s opinion, giving them less reason to change their mind,” said the study’s senior author, Professor Tali Sharot (UCL Psychology & Language Sciences).

For the study, the researchers asked 42 participants, split into pairs, to estimate house prices. They each wagered on whether the asking price would be more or less than a set amount, depending on how confident they were. Next, each lay in an MRI scanner with the two scanners divided by a glass wall. On their screens they were shown the properties again, reminded of their own judgements, then shown their partner’s assessment and wagers, and finally were asked to submit a final wager.

The researchers found that, when both participants agreed, people would increase their final wagers to larger amounts, particularly if their partner had placed a high wager.

Conversely, when the partners disagreed, the opinion of the disagreeing partner had little impact on people’s wagers, even if the disagreeing partner had placed a high wager.

The researchers found that one brain area, the posterior medial prefrontal cortex (pMFC), was involved in incorporating another person’s beliefs into one’s own. Brain activity differed depending on the strength of the partner’s wager, but only when they were already in agreement. When the partners disagreed, there was no relationship between the partner’s wager and brain activity in the pMFC region.

The info is here.

Thursday, January 23, 2020

You Are Already Having Sex With Robots

Emma Grey Ellis
wired.com
Originally published 23 Aug 19

Here are two excerpts:

Carnegie Mellon roboticist Hans Moravec has written about emotions as devices for channeling behavior in helpful ways—for example, sexuality prompting procreation. He concluded that artificial intelligences, in seeking to please humanity, are likely to be highly emotional. By this definition, if you encoded an artificial intelligence with the need to please humanity sexually, their urgency to follow their programming constitutes sexual feelings. Feelings as real and valid as our own. Feelings that lead to the thing that feelings, probably, evolved to lead to: sex. One gets the sense that, for some digisexual people, removing the squishiness of the in-between stuff—the jealousy and hurt and betrayal and exploitation—improves their sexual enjoyment. No complications. The robot as ultimate partner. An outcome of evolution.

So the sexbotcalypse will come. It's not scary, it's just weird, and it's being motivated by millennia-old bad habits. Laziness, yes, but also something else. “I don’t see anything that suggests we’re going to buck stereotypes,” says Charles Ess, who studies virtue ethics and social robots at the University of Oslo. “People aren’t doing this out of the goodness of their hearts. They’re doing this to make money.”

(cut)

Technologizing sexual relationships will also fill one of the last blank spots in tech’s knowledge of (ad-targetable) human habits. Brianna Rader—founder of Juicebox, progenitor of Slutbot—has spoken about how difficult it is to do market research on sex. If having sex with robots or other forms of sex tech becomes commonplace, it wouldn’t be difficult anymore. “We have an interesting relationship with privacy in the US,” Kaufman says. “We’re willing to trade a lot of our privacy and information away for pleasures less complicated than an intimate relationship.”

The info is here.

Saturday, January 11, 2020

A Semblance of Aliveness

J. Grunsven & A. Wynsberghe
Techné: Research in Philosophy and Technology
Published on December 3, 2019

While the design of sex robots is still in the early stages, the social implications of the potential proliferation of sex robots into our lives has been heavily debated by activists and scholars from various disciplines. What is missing in the current debate on sex robots and their potential impact on human social relations is a targeted look at the boundedness and bodily expressivity typically characteristic of humans, the role that these dimensions of human embodiment play in enabling reciprocal human interactions, and the manner in which this contrasts with sex robot-human interactions. Through a fine-grained discussion of these themes, rooted in fruitful but largely untapped resources from the field of enactive embodied cognition, we explore the unique embodiment of sex robots. We argue that the embodiment of the sex robot is constituted by what we term restricted expressivity and a lack of bodily boundedness and that this is the locus of negative but also potentially positive implications. We discuss the possible benefits that these two dimensions of embodiment may have for people within a specific demographic, namely some persons on the autism spectrum. Our preliminary conclusion—that the benefits and the downsides of sex robots reside in the same capability of the robot, its restricted expressivity and lack of bodily boundedness as we call it—demands we take stock of future developments in the design of sex robot embodiment. Given the importance of evidence-based research pertaining to sex robots in particular, as reinforced by Nature (2017) for drawing correlations and making claims, the analysis is intended to set the stage for future research.

The info is here.