Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Relationships. Show all posts

Monday, January 6, 2020

The Majority Does Not Determine Morality

Michael Brown
Townhall.com
Originally posted December 9, 2019

Here is an excerpt:

During the time period from 2003 to 2017, support for polygamy in America rose from 7 percent to 17 percent, an even more dramatic shift from a statistical point of view. And it’s up to 18 percent in 2019.

Gallup noted that this “may simply be the result of the broader leftward shift on moral issues Americans have exhibited in recent years. Or, as conservative columnist Ross Douthat notes in his New York Times blog, ‘Polygamy is bobbing forward in social liberalism's wake ...’ To Douthat and other social conservatives, warming attitudes toward polygamy is a logical consequence of changing social norms -- that values underpinning social liberalism offer ‘no compelling grounds for limiting the number of people who might wish to marry.’”

Gallup also observed that, “It is certainly true that moral perceptions have significantly, fundamentally changed on a number of social issues or behaviors since 2001 -- most notably, gay/lesbian relations, having a baby outside of wedlock, sex between unmarried men and women, and divorce.”

Interestingly, Gallup also noted that there were social reasons that help to explain some of this larger leftward shift (including the rise in divorce and changes in laws; another obvious reason is that people have friends and family members who identify as gay or lesbian).

The info is here.

Sunday, December 29, 2019

It Loves Me, It Loves Me Not: Is It Morally Problematic to Design Sex Robots that Appear to Love Their Owners?

Sven Nyholm and Lily Eva Frank
Techné: Research in Philosophy and Technology
DOI: 10.5840/techne2019122110

Abstract

Drawing on insights from robotics, psychology, and human-computer interaction, developers of sex robots are currently aiming to create emotional bonds of attachment and even love between human users and their products. This is done by creating robots that can exhibit a range of facial expressions, that are made with human-like artificial skin, and that possess a rich vocabulary with many conversational possibilities. In light of the human tendency to anthropomorphize artifacts, we can expect that designers will have some success and that this will lead to the attribution of mental states to the robot that the robot does not actually have, as well as the inducement of significant emotional responses in the user. This raises the question of whether it might be ethically problematic to try to develop robots that appear to love their users. We discuss three possible ethical concerns about this aim: first, that designers may be taking advantage of users’ emotional vulnerability; second, that users may be deceived; and, third, that relationships with robots may block off the possibility of more meaningful relationships with other humans. We argue that developers should attend to the ethical constraints suggested by these concerns in their development of increasingly humanoid sex robots. We discuss two different ways in which they might do so.

Wednesday, October 30, 2019

Punish or Protect? How Close Relationships Shape Responses to Moral Violations

Weidman, A. C., Sowden, W. J., Berg, M. K.,
& Kross, E. (2019).
Personality and Social Psychology Bulletin.
https://doi.org/10.1177/0146167219873485

Abstract

People have fundamental tendencies to punish immoral actors and treat close others altruistically. What happens when these tendencies collide—do people punish or protect close others who behave immorally? Across 10 studies (N = 2,847), we show that people consistently anticipate protecting close others who commit moral infractions, particularly highly severe acts of theft and sexual harassment. This tendency emerged regardless of gender, political orientation, moral foundations, and disgust sensitivity and was driven by concerns about self-interest, loyalty, and harm. We further find that people justify this tendency by planning to discipline close others on their own. We also identify a psychological mechanism that mitigates the tendency to protect close others who have committed severe (but not mild) moral infractions: self-distancing. These findings highlight the role that relational closeness plays in shaping people’s responses to moral violations, underscoring the need to consider relational closeness in future moral psychology work.

From the General Discussion

These findings also clarify the mechanisms through which people reconcile behaving loyally (by protecting close others who commit moral infractions) at the cost of behaving dishonestly while allowing an immoral actor to evade formal punishment (by lying to a police officer). It does not appear that people view close others’ moral infractions as less immoral: A brother’s heinous crime is still a heinous crime.  Instead, when people observe close others behaving immorally, we found through an exploratory linguistic coding analysis that they overwhelmingly intend to enact a lenient form of punishment by confronting the perpetrator to discuss the act. We suspect that doing so allows a person to simultaneously (a) maintain their self-image as a morally upstanding individual and (b) preserve and even enhance the close relationship, in line with the finding in Studies 1d and 1e that protecting close others from legal fallout is viewed as an act of self-interest. These tactics are also broadly consistent with prior work suggesting that people often justify their own immoral acts by focusing on positive consequences of the act or reaffirming their own moral standing (Bandura, 2016). In contrast, we found that when people observe distant others behaving immorally, they report greater intentions to subject these individuals to external, formal means of punishment, such as turning them in to law enforcement or subjecting them to social ostracization.

Sunday, June 2, 2019

Promoting competent and flourishing life-long practice for psychologists: A communitarian perspective

Wise, E. H., & Reuman, L. (2019).
Professional Psychology: Research and Practice, 50(2), 129-135.

Abstract

Based on awareness of the challenges inherent in the practice of psychology there is a burgeoning interest in ensuring that psychologists who serve the public remain competent. These challenges include remaining current in our technical skills and maintaining sufficient personal wellness over the course of our careers. However, beyond merely maintaining competence, we encourage psychologists to envision flourishing lifelong practice that incorporates positive relationships, enhancement of meaning, and positive engagement. In this article we provide an overview of the foundational competencies related to professionalism including ethics, reflective practice, self-assessment, and self-care that underlie our ability to effectively apply technical skills in often complex and emotionally challenging relational contexts. Building on these foundational competencies that were initially defined and promulgated for academic training in health service psychology, we provide an initial framework for conceptualizing psychologist well-being and flourishing lifelong practice that incorporates tenets of applied positive psychology, values-based practice, and a communitarian-oriented approach into the following categories: fostering relationships, meaning making and value-based practice, and enhancing engagement. Finally, we propose broad strategies and specific examples intended to leverage current continuing education mandates into a broadly conceived vision of continuing professional development to support enhanced psychologist functioning for lifelong practice.

The info is here.

Thursday, April 25, 2019

The New Science of How to Argue—Constructively

Jesse Singal
The Atlantic
Originally published April 7, 2019

Here is an excerpt:

Once you know a term like decoupling, you can identify instances in which a disagreement isn’t really about X anymore, but about Y and Z. When some readers first raised doubts about a now-discredited Rolling Stone story describing a horrific gang rape at the University of Virginia, they noted inconsistencies in the narrative. Others insisted that such commentary fit into destructive tropes about women fabricating rape claims, and therefore should be rejected on its face. The two sides weren’t really talking; one was debating whether the story was a hoax, while the other was responding to the broader issue of whether rape allegations are taken seriously. Likewise, when scientists bring forth solid evidence that sexual orientation is innate, or close to it, conservatives have lashed out against findings that would “normalize” homosexuality. But the dispute over which sexual acts, if any, society should discourage is totally separate from the question of whether sexual orientation is, in fact, inborn. Because of a failure to decouple, people respond indignantly to factual claims when they’re actually upset about how those claims might be interpreted.

Nerst believes that the world can be divided roughly into “high decouplers,” for whom decoupling comes easy, and “low decouplers,” for whom it does not. This is the sort of area where erisology could produce empirical insights: What characterizes people’s ability to decouple? Nerst believes that hard-science types are better at it, on average, while artistic types are worse. After all, part of being an artist is seeing connections where other people don’t—so maybe it’s harder for them to not see connections in some cases. Nerst might be wrong. Either way, it’s the sort of claim that could be fairly easily tested if the discipline caught on.

The info is here.

The Brave New World of Sex Robots

Mark Wolverton
undark.org
Originally posted March 29, 2019

Here is an excerpt:

But as the technology develops apace, so does a host of other issues, including political and social ones (Why such emphasis on feminine bots rather than male? Do sexbots really need a “gender” at all?); philosophical and ethical ones (Is sex with a robot really “sex”? What if the robots are sentient?); and legal ones (Does sex with a robot count as cheating on your human partner?).

Many of these concerns overlap with present controversies regarding AI in general, but in this realm, tied so closely with the most profound manifestations of human intimacy, they feel more personal and controversial. Perhaps as a result, Devlin has a self-admitted tendency at times to slip into somewhat heavy-handed feminist polemics, which can overshadow or obscure possible alternative interpretations to some questions — it’s arguable whether the “Blade Runner” films have “a woman problem,” for example, or whether the prevalence of sexbots with idealized and identifiably feminine aesthetics is solely a result of “male objectification.”

Informed by her background as a computer scientist, Devlin provides excellent nuts-and-bolts technical explanations of the fundamentals of machine learning, neural networks, and language processing that provide the necessary foundation for her explorations of the subject, whose sometimes sensitive nature is eased by her sly sense of humor.

The info is here.

Tuesday, March 19, 2019

We're Teaching Consent All Wrong

Sarah Sparks
www.edweek.org
Originally published January 8, 2019

Here is an excerpt:

Instead, researchers and educators offer an alternative: Teach consent as a life skill—not just a sex skill—beginning in early childhood, and begin discussing consent and communication in the context of relationships by 5th or 6th grades, before kids start seriously thinking about sex. (Think that's too young? In yet another study, the CDC found 8 in 10 teenagers didn't get sex education until after they'd already had sex.)

Educators and parents often balk at discussing strategies for and examples of consent because "they incorrectly believe that if you teach consent, students will become more sexually active," said Mike Domitrz, founder of the Date Safe Project, a Milwaukee-based sexual-assault prevention program that focuses on consent education and bystander interventions. "It's a myth. Students of both genders are pretty consistent that a lot of the sexual activity that is going on is occurring under pressure."

Studies suggest young women are more likely to judge consent by verbal communication, while young men rely more on nonverbal cues, though both groups said nonverbal signals are often misinterpreted. And teenagers can be particularly bad at making decisions about risky behavior, including sexual situations, while under social pressure. Brain studies have found adolescents are more likely to take risks and less likely to think about negative consequences when they are in emotionally arousing, or "hot," situations, and that bad decision-making tends to get even worse when they feel they are being judged by their friends.

Making understanding and negotiating consent a life skill gives children and adolescents ways to understand and respect both their own desires and those of other people. And it can help educators frame instruction about consent without sinking into the morass of long-running arguments and anxiety over gender roles, cultural values, and teen sexuality.

The info is here.

Tuesday, March 12, 2019

Sex robots are here, but laws aren’t keeping up with the ethical and privacy issues they raise

Francis Shen
The Conversation
Originally published February 12, 2019

Here is an excerpt:

A brave new world

A fascinating question for me is how the current taboo on sex robots will ebb and flow over time.

There was a time, not so long ago, when humans attracted to the same sex felt embarrassed to make this public. Today, society is similarly ambivalent about the ethics of “digisexuality” – a phrase used to describe a number of human-technology intimate relationships. Will there be a time, not so far in the future, when humans attracted to robots will gladly announce their relationship with a machine?

No one knows the answer to this question. But I do know that sex robots are likely to be in the American market soon, and it is important to prepare for that reality. Imagining the laws governing sexbots is no longer a law professor hypothetical or science fiction.

The info is here.

Monday, December 24, 2018

Your Intuition Is Wrong, Unless These 3 Conditions Are Met

Emily Zulz
www.thinkadvisor.com
Originally posted November 16, 2018

Here is an excerpt:

“Intuitions of master chess players when they look at the board [and make a move], they’re accurate,” he said. “Everybody who’s been married could guess their wife’s or their husband’s mood by one word on the telephone. That’s an intuition and it’s generally very good, and very accurate.”

According to Kahneman, who’s studied when one can trust intuition and when one cannot, there are three conditions that need to be met in order to trust one’s intuition.

The first is that there has to be some regularity in the world that someone can pick up and learn.

“So, chess players certainly have it. Married people certainly have it,” Kahneman explained.

However, he added, people who pick stocks in the stock market do not have it.

“Because, the stock market is not sufficiently regular to support developing that kind of expert intuition,” he explained.

The second condition for accurate intuition is “a lot of practice,” according to Kahneman.

And the third condition is immediate feedback. Kahneman said that “you have to know almost immediately whether you got it right or got it wrong.”

The info is here.

Saturday, December 1, 2018

Building trust by tearing others down: When accusing others of unethical behavior engenders trust

Jessica A. Kennedy, Maurice E. Schweitzer.
Organizational Behavior and Human Decision Processes
Volume 149, November 2018, Pages 111-128

Abstract

We demonstrate that accusations harm trust in targets, but boost trust in the accuser when the accusation signals that the accuser has high integrity. Compared to individuals who did not accuse targets of engaging in unethical behavior, accusers engendered greater trust when observers perceived the accusation to be motivated by a desire to defend moral norms, rather than by a desire to advance ulterior motives. We also found that the accuser’s moral hypocrisy, the accusation's revealed veracity, and the target’s intentions when committing the unethical act moderate the trust benefits conferred to accusers. Taken together, we find that accusations have important interpersonal consequences.

Highlights

• Accusing others of unethical behavior can engender greater trust in an accuser.
• Accusations can elevate trust by boosting perceptions of accusers’ integrity.
• Accusations fail to build trust when they are perceived to reflect ulterior motives.
• Morally hypocritical accusers and false accusations fail to build trust.
• Accusations harm trust in the target.

The research is here.

Monday, November 19, 2018

Why Facts Don’t Change Our Minds

James Clear
www.jamesclear.com
Undated

Facts Don't Change Our Minds. Friendship Does.

Convincing someone to change their mind is really the process of convincing them to change their tribe. If they abandon their beliefs, they run the risk of losing social ties. You can’t expect someone to change their mind if you take away their community too. You have to give them somewhere to go. Nobody wants their worldview torn apart if loneliness is the outcome.

The way to change people’s minds is to become friends with them, to integrate them into your tribe, to bring them into your circle. Now, they can change their beliefs without the risk of being abandoned socially.

The British philosopher Alain de Botton suggests that we simply share meals with those who disagree with us:
“Sitting down at a table with a group of strangers has the incomparable and odd benefit of making it a little more difficult to hate them with impunity. Prejudice and ethnic strife feed off abstraction. However, the proximity required by a meal – something about handing dishes around, unfurling napkins at the same moment, even asking a stranger to pass the salt – disrupts our ability to cling to the belief that the outsiders who wear unusual clothes and speak in distinctive accents deserve to be sent home or assaulted. For all the large-scale political solutions which have been proposed to salve ethnic conflict, there are few more effective ways to promote tolerance between suspicious neighbours than to force them to eat supper together.” 

Perhaps it is not difference, but distance that breeds tribalism and hostility. As proximity increases, so does understanding. I am reminded of Abraham Lincoln's quote, “I don't like that man. I must get to know him better.”

Facts don't change our minds. Friendship does.

Wednesday, November 14, 2018

Keeping Human Stories at the Center of Health Care

M. Bridget Duffy
Harvard Business Review
Originally published October 8, 2018

Here is an excerpt:

A mentor told me early in my career that only 20% of healing involves the high-tech stuff. The remaining 80%, he said, is about the relationships we build with patients, the physical environments we create, and the resources we provide that enable patients to tap into whatever they need for spiritual sustenance. The longer I work in health care, the more I realize just how right he was.

How do we get back to the 80-20 rule? By placing the well-being of patients and care teams at the top of the list for every initiative we undertake and every technology we introduce. Rather than just introducing technology with no thought as to its impact on clinicians — as happened with many rollouts of electronic medical records (EMRs) — we need to establish a way to quantifiably measure whether a new technology actually improves a clinician’s workday and ability to deliver care or simply creates hassles and inefficiency. Let’s develop an up-front “technology ROI” that measures workflow impact, inefficiency, hassle and impact on physician and nurse well-being.

The National Taskforce for Humanity in Healthcare, of which I am a founding member, is piloting a system of metrics for well-being developed by J. Bryan Sexton of Duke University Medical Center. Instead of measuring burnout or how broken health care people are, Dr. Sexton’s metrics focus on emotional thriving and emotional resilience. (The former is how strongly people agree or disagree with these statements: “I have a chance to use my strengths every day at work,” “I feel like I am thriving at my job,” “I feel like I am making a meaningful difference at my job,” and “I often have something that I am very much looking forward to at my job.”)

The info is here.

Wednesday, August 29, 2018

How Sex Robots Could Revolutionize Marriage—for the Better

Marina Adshade
slate.com
Originally posted August 14, 2018

Here is an excerpt:

The question then is: What happens to marriage when sexbot technology provides a low-cost alternative to easy sexual access in marriage? One possibility is a reversal of the past century of societal change, which tied together marriage and sexual intimacy, and a return to the perception of marriage as a productive household unit.


Those who fear that sexbot technology will have a negative impact on marriage rates see sexbot technology as a substitute to sexual access in marriage. If they are correct, a decrease in the price of sexual access outside of marriage will decrease the demand for sexual access in marriage, and marriage rates will fall. It could just as easily be argued, however, that within marriage sexual access and household production are complements in consumption—in other words, goods or services that are often consumed together, like tea and sugar, or cellular data and phone apps. If that is the case, then, consumer theory predicts that easy access to sexbot technology will actually increase the rate of lifetime marriage, since a fall in the price of a good increases the demand for complements in consumption, just as a fall in the price of cellular data would likely increase demand for phone streaming services. Moreover, if sexual access through sexbot technology is a complement to household production, then we could observe an increase in the quality of marriages and, as a result, a reduction in rates of divorce.

The info is here.

Saturday, August 11, 2018

Should we care that the sex robots are coming?

Kate Devlin
unhurd.com
Originally published July 12, 2018

Here is an excerpt:

There’s no evidence to suggest that human-human relationships will be damaged. Indeed, it may be a chance for people to experience feelings of love that they are otherwise denied, for any number of reasons. Whether or not that love is considered valid by society is a different matter. And while objectification is definitely an issue, it may be an avoidable one. Security and privacy breaches are a worry in any smart technologies, which puts a whole new spin on safe sex.

As for child sex robots – an abhorrent image – people have already been convicted for importing child-like sex dolls. But we shouldn’t shy from considering whether research might deem them useful in a clinical setting, such as testing rehabilitation success, as has been trialled with virtual reality.

While non-sexual care robots are already in use, it was only three months ago that the race to produce the first commercially available model was won by a lifeless sex doll with an animatronic head and an integrated AI chatbot called Harmony. She might look the part, but she doesn’t move from the neck down. We are still a long way from Westworld.

Naturally, a niche market will be delighted at the prospect of bespoke robot pleasure to come. But many others are worried about the impact these machines will have on our own, human relationships. These concerns aren’t dispelled by the fact that the current form of the sex robot is a reductive, cartoonish stereotype of a woman: all big hair and bigger breasts.

The info is here.

Friday, July 20, 2018

The Psychology of Offering an Apology: Understanding the Barriers to Apologizing and How to Overcome Them

Karina Schumann
Current Directions in Psychological Science 
Vol 27, Issue 2, pp. 74 - 78
First Published March 8, 2018

Abstract

After committing an offense, a transgressor faces an important decision regarding whether and how to apologize to the person who was harmed. The actions he or she chooses to take after committing an offense can have dramatic implications for the victim, the transgressor, and their relationship. Although high-quality apologies are extremely effective at promoting reconciliation, transgressors often choose to offer a perfunctory apology, withhold an apology, or respond defensively to the victim. Why might this be? In this article, I propose three major barriers to offering high-quality apologies: (a) low concern for the victim or relationship, (b) perceived threat to the transgressor’s self-image, and (c) perceived apology ineffectiveness. I review recent research examining how these barriers affect transgressors’ apology behavior and describe insights this emerging work provides for developing methods to move transgressors toward more reparative behavior. Finally, I discuss important directions for future research.

The article is here.

Sunday, July 8, 2018

A Son’s Race to Give His Dying Father Artificial Immortality

James Vlahos
wired.com
Originally posted July 18, 2017

Here is an excerpt:

I dream of creating a Dadbot—a chatbot that emulates not a children’s toy but the very real man who is my father. And I have already begun gathering the raw material: those 91,970 words that are destined for my bookshelf.

The thought feels impossible to ignore, even as it grows beyond what is plausible or even advisable. Right around this time I come across an article online, which, if I were more superstitious, would strike me as a coded message from forces unseen. The article is about a curious project conducted by two researchers at Google. The researchers feed 26 million lines of movie dialog into a neural network and then build a chatbot that can draw from that corpus of human speech using probabilistic machine logic. The researchers then test the bot with a bunch of big philosophical questions.

“What is the purpose of living?” they ask one day.

The chatbot’s answer hits me as if it were a personal challenge.

“To live forever,” it says.

The article is here.

Yes, I saw the Black Mirror episode with a similar theme.

Friday, June 29, 2018

The Surprising Power of Questions

Alison Wood Brooks and Leslie K. John
Harvard Business Review
May-June 2018 Issue

Here are two excerpts:

Most people don’t grasp that asking a lot of questions unlocks learning and improves interpersonal bonding. In Alison’s studies, for example, though people could accurately recall how many questions had been asked in their conversations, they didn’t intuit the link between questions and liking. Across four studies, in which participants were engaged in conversations themselves or read transcripts of others’ conversations, people tended not to realize that question asking would influence—or had influenced—the level of amity between the conversationalists.

The New Socratic Method

The first step in becoming a better questioner is simply to ask more questions. Of course, the sheer number of questions is not the only factor that influences the quality of a conversation: The type, tone, sequence, and framing also matter.

(cut)

Not all questions are created equal. Alison’s research, using human coding and machine learning, revealed four types of questions: introductory questions (“How are you?”), mirror questions (“I’m fine. How are you?”), full-switch questions (ones that change the topic entirely), and follow-up questions (ones that solicit more information). Although each type is abundant in natural conversation, follow-up questions seem to have special power. They signal to your conversation partner that you are listening, care, and want to know more. People interacting with a partner who asks lots of follow-up questions tend to feel respected and heard.

An unexpected benefit of follow-up questions is that they don’t require much thought or preparation—indeed, they seem to come naturally to interlocutors. In Alison’s studies, the people who were told to ask more questions used more follow-up questions than any other type without being instructed to do so.

The article is here.

This article clearly relates to psychotherapy communication.

Thursday, June 14, 2018

Sex robots are coming. We might even fall in love with them.

Sean Illing
www.vox.com
Originally published May 11, 2018

Here is an excerpt:

Sean Illing: Your essay poses an interesting question: Is mutual love with a robot possible? What’s the answer?

Lily Eva Frank:

Our essay tried to explore some of the core elements of romantic love that people find desirable, like the idea of being a perfect match for someone or the idea that we should treasure the little traits that make someone unique, even those annoying flaws or imperfections.

The key thing is that we love someone because there’s something about being with them that matters, something particular to them that no one else has. And we make a commitment to that person that holds even when they change, like aging, for example.

Could a robot do all these things? Our answer is, in theory, yes. But only a very advanced form of artificial intelligence could manage it because it would have to do more than just perform as if it were a person doing the loving. The robot would have to have feelings and internal experiences. You might even say that it would have to be self-aware.

But that would leave open the possibility that the sex bot might not want to have sex with you, which sort of defeats the purpose of developing these technologies in the first place.

(cut)

I think people are weird enough that it is probably possible for them to fall in love with a cat or a dog or a machine that doesn’t reciprocate the feelings. A few outspoken proponents of sex dolls and robots claim they love them. Check out the testimonials page on the websites of sex doll manufactures; they say things like, “Three years later, I love her as much as the first day I met her.” I don’t want to dismiss these people’s reports.

The information is here.

Friday, June 8, 2018

The pros and cons of having sex with robots

Karen Turner
www.vox.com
Originally posted January 18, 2018

Here is an excerpt:

Karen Turner: Where does sex robot technology stand right now?

Neil McArthur:

When people have this idea of a sex robot, they think it’s going to look like a human being, it’s gonna walk around and say seductive things and so on. I think that’s actually the slowest-developing part of this whole nexus of sexual technology. It will come — we are going to have realistic sex robots. But there are a few technical hurdles to creating humanoid robots that are proving fairly stubborn. Making them walk is one of them. And if you use Siri or any of those others, you know that AI is proving sort of stubbornly resistant to becoming realistic.

But I think that when you look more broadly at what’s happening with sexual technology, virtual reality in general has just taken off. And it’s being used in conjunction with something called teledildonics, which is kind of an odd term. But all it means is actual devices that you hook up to yourself in various ways that sync with things that you see onscreen. It’s truly amazing what’s going on.

(cut)

When you look at the ethical or philosophical considerations, I think there’s two strands. One is the concerns people have, and two — which I think maybe doesn’t get as much attention, in the media at least — is the potential advantages.

The concerns have to do with the psychological impact. As you saw with those Apple shareholders [who asked Apple to help protect children from digital addiction], we’re seeing a lot of concern about the impact that technology is having on people’s lives right now. Many people feel that anytime you’re dealing with sexual technology, those sorts of negative impacts really become intensified — specifically, social isolation, people cutting themselves off from the world.

The article is here.

Thursday, June 7, 2018

Embracing the robot

John Danaher
aeon.co
Originally posted March 19, 2018

Here is an excerpt:

Contrary to the critics, I believe our popular discourse about robotic relationships has become too dark and dystopian. We overstate the negatives and overlook the ways in which relationships with robots could complement and enhance existing human relationships.

In Blade Runner 2049, the true significance of K’s relationship with Joi is ambiguous. It seems that they really care for each other, but this could be an illusion. She is, after all, programmed to serve his needs. The relationship is an inherently asymmetrical one. He owns and controls her; she would not survive without his good will. Furthermore, there is a third-party lurking in the background: she has been designed and created by a corporation, which no doubt records the data from her interactions, and updates her software from time to time.

This is a far cry from the philosophical ideal of love. Philosophers emphasise the need for mutual commitment in any meaningful relationship. It’s not enough for you to feel a strong, emotional attachment to another; they have to feel a similar attachment to you. Robots might be able to perform love, saying and doing all the right things, but performance is insufficient.

The information is here.