Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Chatbot.

Thursday, January 23, 2020

You Are Already Having Sex With Robots

Emma Grey Ellis
wired.com
Originally published August 23, 2019

Here are two excerpts:

Carnegie Mellon roboticist Hans Moravec has written about emotions as devices for channeling behavior in helpful ways—for example, sexuality prompting procreation. He concluded that artificial intelligences, in seeking to please humanity, are likely to be highly emotional. By this definition, if you encoded an artificial intelligence with the need to please humanity sexually, their urgency to follow their programming constitutes sexual feelings. Feelings as real and valid as our own. Feelings that lead to the thing that feelings, probably, evolved to lead to: sex. One gets the sense that, for some digisexual people, removing the squishiness of the in-between stuff—the jealousy and hurt and betrayal and exploitation—improves their sexual enjoyment. No complications. The robot as ultimate partner. An outcome of evolution.

So the sexbotcalypse will come. It's not scary, it's just weird, and it's being motivated by millennia-old bad habits. Laziness, yes, but also something else. “I don’t see anything that suggests we’re going to buck stereotypes,” says Charles Ess, who studies virtue ethics and social robots at the University of Oslo. “People aren’t doing this out of the goodness of their hearts. They’re doing this to make money.”

(cut)

Technologizing sexual relationships will also fill one of the last blank spots in tech’s knowledge of (ad-targetable) human habits. Brianna Rader—founder of Juicebox, progenitor of Slutbot—has spoken about how difficult it is to do market research on sex. If having sex with robots or other forms of sex tech becomes commonplace, it wouldn’t be difficult anymore. “We have an interesting relationship with privacy in the US,” Kaufman says. “We’re willing to trade a lot of our privacy and information away for pleasures less complicated than an intimate relationship.”

The info is here.

Thursday, September 13, 2018

Meet the Chatbots Providing Mental Health Care

Daniela Hernandez
Wall Street Journal
Originally published Aug. 9, 2018

Here is an excerpt:

Wysa Ltd., a London- and Bangalore-based startup, is testing a free chatbot to teach adolescents emotional resilience, said co-founder Ramakant Vempati. In the app, a chubby penguin named Wysa helps users evaluate the sources of their stress and provides tips on how to stay positive, like thinking of a loved one or spending time outside. The company said its 400,000 users, most of whom are under 35, have had more than 20 million conversations with the bot.

Wysa is a wellness app, not a medical intervention, Vempati said, but it relies on cognitive behavioral therapy, mindfulness techniques and meditations that are “known to work in a self-help context.” If a user expresses thoughts of self-harm, Wysa reminds them that it’s just a bot and provides contact information for crisis hotlines. Alternatively, for $30 a month, users can access unlimited chat sessions with a human “coach.” Other therapy apps, such as Talkspace, offer similar low-cost services with licensed professionals.

Chatbots have potential, said Beth Jaworski, a mobile apps specialist at the National Center for PTSD in Menlo Park, Calif. But definitive research on whether they can help patients with more serious conditions, like major depression, still hasn’t been done, in part because the technology is so new, she said. Clinicians also worry about privacy. Mental health information is sensitive data; turning it over to companies could have unforeseen consequences.
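The self-harm safeguard described above can be imagined as a simple trigger-and-refer rule. This is purely an illustrative sketch, not Wysa's actual logic: real apps rely on trained classifiers rather than keyword lists, and the phrase list and replies here are invented for the example.

```python
# Hypothetical sketch of a crisis-escalation rule in a wellness chatbot.
# Real systems use trained classifiers; this keyword list is invented.
CRISIS_PHRASES = {"hurt myself", "self-harm", "suicide"}

def respond(message: str) -> str:
    """Escalate to a hotline referral if the message suggests self-harm."""
    text = message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        # Mirror the behavior described in the article: disclose the bot's
        # limits and point the user to human help.
        return ("Remember, I'm just a bot. Please reach out to a "
                "crisis hotline right away.")
    return "Tell me more about what's on your mind."
```

Even this toy version shows the design trade-off clinicians worry about: the bot must decide, from sensitive text, when to stop being a "wellness" tool and hand off to humans.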

The article is here.

Sunday, July 8, 2018

A Son’s Race to Give His Dying Father Artificial Immortality

James Vlahos
wired.com
Originally posted July 18, 2017

Here is an excerpt:

I dream of creating a Dadbot—a chatbot that emulates not a children’s toy but the very real man who is my father. And I have already begun gathering the raw material: those 91,970 words that are destined for my bookshelf.

The thought feels impossible to ignore, even as it grows beyond what is plausible or even advisable. Right around this time I come across an article online, which, if I were more superstitious, would strike me as a coded message from forces unseen. The article is about a curious project conducted by two researchers at Google. The researchers feed 26 million lines of movie dialog into a neural network and then build a chatbot that can draw from that corpus of human speech using probabilistic machine logic. The researchers then test the bot with a bunch of big philosophical questions.

“What is the purpose of living?” they ask one day.

The chatbot’s answer hits me as if it were a personal challenge.

“To live forever,” it says.
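The Google project in the excerpt trained a neural network on roughly 26 million lines of movie dialogue; reproducing that is far beyond a snippet. As a drastically simplified illustration of the underlying idea of drawing replies from a corpus of human speech, here is a toy retrieval "chatbot" that answers with the stored response whose prompt best overlaps the question. The two-entry corpus is invented for the example:

```python
# Toy corpus-driven reply: not the neural model from the article, just a
# word-overlap retrieval sketch over an invented two-line "corpus".
CORPUS = [
    ("what is the purpose of living", "to live forever"),
    ("what is your name", "i am a chatbot"),
]

def reply(question: str) -> str:
    """Return the stored answer whose prompt shares the most words."""
    words = set(question.lower().replace("?", " ").split())
    prompt, answer = max(CORPUS, key=lambda p: len(words & set(p[0].split())))
    return answer
```

The real system replaced this brittle overlap score with learned probabilities over sequences, which is why it could produce answers that were never verbatim in the corpus.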

The article is here.

Yes, I saw the Black Mirror episode with a similar theme.

Thursday, April 26, 2018

Rogue chatbots deleted in China after questioning Communist Party

Neil Connor
The Telegraph
Originally published August 3, 2017

Two chatbots have been pulled from a Chinese messaging app after they questioned the rule of the Communist Party and made unpatriotic comments.

The bots were available on a messaging app run by Chinese Internet giant Tencent, which has more than 800 million users, before apparently going rogue.

One of the robots, BabyQ, was asked “Do you love the Communist Party”, according to a screenshot posted on Sina Weibo, China’s version of Twitter.

Another web user said to the chatbot: “Long Live the Communist Party”, to which BabyQ replied: “Do you think such corrupt and incapable politics can last a long time?”

(cut)

The Chinese Internet is heavily censored by Beijing, which sees any criticism of its rule as a threat.

Social media posts which are deemed critical are often quickly deleted by authorities, while searches for sensitive topics are often blocked.

The information is here.

Friday, June 30, 2017

Ethics and Artificial Intelligence With IBM Watson's Rob High

Blake Morgan
Forbes.com
Originally posted June 12, 2017

Artificial intelligence seems to be popping up everywhere, and it has the potential to change nearly everything we know about data and the customer experience. However, it also brings up new issues regarding ethics and privacy.

One of the keys to keeping AI ethical is for it to be transparent, says Rob High, vice president and chief technology officer of IBM Watson. When customers interact with a chatbot, for example, they need to know they are communicating with a machine and not an actual human. AI, like most other technology tools, is most effective when it is used to extend the natural capabilities of humans instead of replacing them. That means that AI and humans are best when they work together and can trust each other.

Chatbots are one of the most commonly used forms of AI. Although they can be used successfully in many ways, there is still a lot of room for growth. As they currently stand, chatbots mostly perform basic actions like turning on lights, providing directions, and answering simple questions that a person asks directly. However, in the future, chatbots should and will be able to go deeper to find the root of the problem. For example, a person asking a chatbot what her bank balance is might be asking the question because she wants to invest money or make a big purchase—a futuristic chatbot could find the real reason she is asking and turn it into a more developed conversation. In order to do that, chatbots will need to ask more questions and drill deeper, and humans need to feel comfortable providing their information to machines.
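The "drill deeper" behavior High describes can be pictured as a layer that maps a surface request to a follow-up question about the user's underlying goal. The keywords and follow-ups below are hypothetical examples, not anything from IBM Watson:

```python
# Hypothetical intent-drilling layer: instead of just answering the
# surface request, ask about the likely underlying goal.
FOLLOW_UPS = {
    "balance": "Are you planning a purchase or looking to invest?",
    "directions": "Are you driving, walking, or taking transit?",
}

def deeper_reply(message: str) -> str:
    """Return a follow-up question probing the user's underlying intent."""
    text = message.lower()
    for keyword, follow_up in FOLLOW_UPS.items():
        if keyword in text:
            return follow_up
    return "Could you tell me a bit more about what you need?"
```

Note that even this trivial version depends on the user volunteering more context, which is exactly the trust problem the article raises: drilling deeper only works if people are comfortable handing more of their intentions to a machine.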

The article is here.