Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, May 2, 2017

Would You Become An Immortal Machine?

Marcelo Gleiser
npr.org
Originally posted March 27, 2017

Here is an excerpt:

"A man is a god in ruins," wrote Ralph Waldo Emerson. This quote, which O'Connell places at the book's opening page, captures the essence of the quest. If man is a failed god, there may be a way to fix this. Since "The Fall," we "lost" our god-like immortality, and have been looking for ways to regain it. Can science do this? Is mortality merely a scientific question? Suppose that it is — and that we can fix it, as we can a headache. Would you pay the price by transferring your "essence" to a non-human entity that will hold it, be it silicone or some kind of artificial robot? Can you be you when you don't have your body? Are you really just transferrable information?

As O'Connell meets an extraordinary group of people, from serious scientists and philosophers to wackos, he keeps asking himself this question, knowing full well his answer: Absolutely not! What makes us human is precisely our fallibility, our connection to our bodies, the existential threat of death. Remove that and we are a huge question mark, something we can't even contemplate. No thanks, says O'Connell, in a deliciously satiric style, at once lyrical, informative, and captivating.

The article is here.

Sunday, June 1, 2014

The Ethics of Automated Cars

By Patrick Lin
Wired Magazine
Originally published May 6, 2014

Here is an excerpt:

Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require the deliberate and systematic discrimination of, say, large vehicles to collide into. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?

What seemed to be a sensible programming design, then, runs into ethical challenges. Volvo and other SUV owners may have a legitimate grievance against the manufacturer of robot cars that favor crashing into them over smaller cars, even if physics tells us this is for the best.

The entire story is here.

Friday, May 30, 2014

Now The Military Is Going To Build Robots That Have Morals

By Patrick Tucker
Defense One
Originally posted May 13, 2014

Are robots capable of moral or ethical reasoning? It’s no longer just a question for tenured philosophy professors or Hollywood directors. This week, it’s a question being put to the United Nations.

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

The entire article is here.

Thursday, March 20, 2014

The Philosophy of ‘Her’

By Susan Schneider
The New York Times
Originally published March 2, 2014

Here is an excerpt:

“Her” raises two questions that have long preoccupied philosophers. Are nonbiological creatures like Samantha capable of consciousness — at least in theory, if not yet in practice? And if so, does that mean that we humans might one day be able to upload our own minds to computers, perhaps to join Samantha in being untethered from “a body that’s inevitably going to die”?

(cut)

Some people argue that the capacity to be conscious is unique to biological organisms, so that even superintelligent A.I. programs would be devoid of conscious experience. If this view is correct, then a relationship between a human being and a program like Samantha, however intelligent she might be, would be hopelessly one-sided. Moreover, few humans would want to join Samantha, for to upload your brain to a computer would be to forfeit your consciousness.

The entire article is here.

Tuesday, January 21, 2014

The Closing of the Scientific Mind

By David Gelernter
Commentary
Originally published January 1, 2014

Here is an excerpt:

Many scientists are proud of having booted man off his throne at the center of the universe and reduced him to just one more creature—an especially annoying one—in the great intergalactic zoo. That is their right. But when scientists use this locker-room braggadocio to belittle the human viewpoint, to belittle human life and values and virtues and civilization and moral, spiritual, and religious discoveries, which is all we human beings possess or ever will, they have outrun their own empiricism. They are abusing their cultural standing. Science has become an international bully.

Nowhere is its bullying more outrageous than in its assault on the phenomenon known as subjectivity.

Your subjective, conscious experience is just as real as the tree outside your window or the photons striking your retina—even though you alone feel it. Many philosophers and scientists today tend to dismiss the subjective and focus wholly on an objective, third-person reality—a reality that would be just the same if men had no minds. They treat subjective reality as a footnote, or they ignore it, or they announce that, actually, it doesn’t even exist.

The entire article is here.