Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Algorithm. Show all posts

Wednesday, March 7, 2018

The Squishy Ethics of Sex With Robots

Adam Rogers
Wired.com
Originally published February 2, 2018

Here is an excerpt:

Most of the world is ready to accept algorithm-enabled, internet-connected, virtual-reality-optimized sex machines with open arms (arms! I said arms!). The technology is evolving fast, which means two inbound waves of problems. Privacy and security, sure, but even solving those won’t answer two very hard questions: Can a robot consent to having sex with you? Can you consent to sex with it?

One thing that is unquestionable: There is a market. Either through licensing the teledildonics patent or risking lawsuits, several companies have tried to build sex technology that takes advantage of Bluetooth and the internet. “Remote connectivity allows people on opposite ends of the world to control each other’s dildo or sleeve device,” says Maxine Lynn, a patent attorney who writes the blog Unzipped: Sex, Tech, and the Law. “Then there’s also bidirectional control, which is going to be huge in the future. That’s when one sex toy controls the other sex toy and vice versa.”

Vibease, for example, makes a wearable that pulsates in time with synchronized digital books or with a partner controlling an app. We-Vibe makes vibrators that a partner can control or set to preset patterns. And so on.

The article is here.

Saturday, November 26, 2016

What is data ethics?

Luciano Floridi and Mariarosaria Taddeo
Philosophical Transactions of the Royal Society A

This theme issue has the founding ambition of landscaping data ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data ethics builds on the foundation provided by computer and information ethics but, at the same time, it refines the approach endorsed so far in this research field, by shifting the level of abstraction of ethical enquiries, from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even data that never translate directly into information but can be used to support actions or generate behaviours, for example. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations—the interactions among hardware, software and data—rather than on the variety of digital technologies that enable them. And it emphasizes the complexity of the ethical challenges posed by data science. Because of such complexity, data ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of data science and its applications within a consistent, holistic and inclusive framework. Only as a macroethics will data ethics provide solutions that can maximize the value of data science for our societies, for all of us and for our environments. This article is part of the themed issue ‘The ethical impact of data science’.

The article is here.

Thursday, November 10, 2016

The Ethics of Algorithms: Mapping the Debate

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S. and Floridi, L. 2016 (in press). ‘The Ethics of Algorithms: Mapping the Debate’. Big Data & Society

Abstract

In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

The article is here.

Tuesday, September 20, 2016

Big data, Google and the end of free will

Yuval Noah Harari
Financial Times
Originally posted August 26, 2016

Here are two excerpts:

This has already happened in the field of medicine. The most important medical decisions in your life are increasingly based not on your feelings of illness or wellness, or even on the informed predictions of your doctor — but on the calculations of computers who know you better than you know yourself. A recent example of this process is the case of the actress Angelina Jolie. In 2013, Jolie took a genetic test that proved she was carrying a dangerous mutation of the BRCA1 gene. According to statistical databases, women carrying this mutation have an 87 per cent probability of developing breast cancer. Although at the time Jolie did not have cancer, she decided to pre-empt the disease and undergo a double mastectomy. She didn’t feel ill but she wisely decided to listen to the computer algorithms. “You may not feel anything is wrong,” said the algorithms, “but there is a time bomb ticking in your DNA. Do something about it — now!”

(cut)

But even if Dataism is wrong about life, it may still conquer the world. Many previous creeds gained enormous popularity and power despite their factual mistakes. If Christianity and communism could do it, why not Dataism? Dataism has especially good prospects, because it is currently spreading across all scientific disciplines. A unified scientific paradigm may easily become an unassailable dogma.

The article is here.

Sunday, August 14, 2016

The Ethics of Artificial Intelligence in Intelligence Agencies

Cortney Weinbaum
The National Interest
Originally published July 18, 2016

Here is an excerpt:

Consider what could happen if the intelligence community creates a policy similar to the Pentagon directive and requires a human operator be allowed to intervene at any moment. One day the computer warns of an imminent attack, but the human analyst disagrees with the AI intelligence assessment. Does the CIA warn the president that an attack is about to occur? How is the human analyst’s assessment valued against the AI-generated intelligence?

Or imagine that a highly sophisticated foreign country infiltrates the most sensitive U.S. intelligence systems, gains access to the algorithms and replaces the programming code with its own. The hacked AI system is no longer capable of providing accurate intelligence on that country.

The article is here.

Wednesday, July 13, 2016

In Wisconsin, a Backlash Against Using Data to Foretell Defendants’ Futures

By Mitch Smith
The New York Times
Originally published June 23, 2016

Here is an excerpt:

Compas is an algorithm developed by a private company, Northpointe Inc., that calculates the likelihood of someone committing another crime and suggests what kind of supervision a defendant should receive in prison. The results come from a survey of the defendant and information about his or her past conduct. Compas assessments are a data-driven complement to the written presentencing reports long compiled by law enforcement agencies.

Company officials say the algorithm’s results are backed by research, but they are tight-lipped about its details. They do acknowledge that men and women receive different assessments, as do juveniles, but the factors considered and the weight given to each are kept secret.

“The key to our product is the algorithms, and they’re proprietary,” said Jeffrey Harmon, Northpointe’s general manager. “We’ve created them, and we don’t release them because it’s certainly a core piece of our business. It’s not about looking at the algorithms. It’s about looking at the outcomes.”
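As the excerpt notes, COMPAS's actual factors and weights are trade secrets, but the general shape of such a tool — survey answers and prior-conduct features combined into a score, then bucketed into supervision bands — can be sketched generically. Everything below (feature names, weights, thresholds) is hypothetical, for illustration only:

```python
import math

# Hypothetical weights -- the real tool's factors and weights are proprietary.
WEIGHTS = {"prior_arrests": 0.35, "age_at_first_offense": -0.04, "failed_supervision": 0.6}
BIAS = -1.5

def risk_score(defendant: dict) -> float:
    """Combine record/survey features into a probability-like score (logistic model)."""
    z = BIAS + sum(w * defendant.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))  # 0..1, higher = predicted higher reoffense risk

def supervision_band(p: float) -> str:
    """Bucket the continuous score into the coarse bands a presentencing report shows."""
    return "low" if p < 0.33 else "medium" if p < 0.66 else "high"

defendant = {"prior_arrests": 4, "age_at_first_offense": 17, "failed_supervision": 1}
p = risk_score(defendant)
print(supervision_band(p))  # prints "medium" for this made-up record
```

The point of the dispute in the article is that only the output (the band) is visible to courts; the weights that produced it are not.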

The article is here.

Thursday, August 20, 2015

Algorithms and Bias: Q. and A. With Cynthia Dwork

By Claire Cain Miller
The New York Times - The Upshot
Originally posted August 10, 2015

Here is an excerpt:

Q: Some people have argued that algorithms eliminate discrimination because they make decisions based on data, free of human bias. Others say algorithms reflect and perpetuate human biases. What do you think?

A: Algorithms do not automatically eliminate bias. Suppose a university, with admission and rejection records dating back for decades and faced with growing numbers of applicants, decides to use a machine learning algorithm that, using the historical records, identifies candidates who are more likely to be admitted. Historical biases in the training data will be learned by the algorithm, and past discrimination will lead to future discrimination.
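Dwork's admissions example — a model trained on biased historical decisions reproduces them — can be shown in a few lines. A toy sketch, with invented records in which past admissions tracked group membership rather than test scores:

```python
# Hypothetical history: group A was admitted regardless of score, group B rejected.
history = [
    # (group, test_score, admitted)
    ("A", 60, True), ("A", 55, True), ("A", 80, True),
    ("B", 85, False), ("B", 90, False), ("B", 70, False),
]

def fit_majority_rule(records):
    """The simplest 'learner' that fits this data: predict each group's majority label."""
    by_group = {}
    for group, _score, admitted in records:
        by_group.setdefault(group, []).append(admitted)
    return {g: sum(labels) / len(labels) >= 0.5 for g, labels in by_group.items()}

model = fit_majority_rule(history)

# A high-scoring group-B applicant is still predicted "reject":
# past discrimination in the labels becomes future discrimination in the model.
print(model["B"])  # prints False
```

Nothing in the fitting step is malicious; the bias enters entirely through the training labels, which is Dwork's point.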

The entire article is here.

Friday, September 5, 2014

Here’s a Terrible Idea: Robot Cars With Adjustable Ethics Settings

By Patrick Lin
Wired
Originally posted August 18, 2014

Here is an excerpt:

So why not let the user select the car’s “ethics setting”? The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible.

Plus, with an adjustable ethics dial set by the customer, the manufacturer presumably can’t be blamed for hard judgment calls, especially in no-win scenarios, right? In one survey, 44 percent of the respondents preferred to have a personalized ethics setting, while only 12 percent thought the manufacturer should predetermine the ethical standard. So why not give customers what they want?
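As a thought experiment, the "ethics setting" Lin describes amounts to a user-configurable objective function: the same crash scenario scores differently under each dial position. A minimal hypothetical sketch (the setting names and cost numbers are invented here, not from any real vehicle):

```python
from enum import Enum

class EthicsSetting(Enum):
    PROTECT_OWNER = "protect_owner"            # weight the owner's harm heavily
    MINIMIZE_HARM = "minimize_harm"            # value all harm equally
    MINIMIZE_LIABILITY = "minimize_liability"  # prefer the legally safer outcome

def crash_cost(setting, owner_harm, others_harm, liability):
    """Score one candidate maneuver; the car would pick the lowest-cost option."""
    if setting is EthicsSetting.PROTECT_OWNER:
        return 10 * owner_harm + others_harm
    if setting is EthicsSetting.MINIMIZE_HARM:
        return owner_harm + others_harm
    return liability

# One scenario, two maneuvers: (owner_harm, others_harm, liability) per option.
options = {"swerve": (3, 1, 5), "brake": (1, 4, 2)}
for setting in EthicsSetting:
    choice = min(options, key=lambda k: crash_cost(setting, *options[k]))
    print(setting.value, "->", choice)
```

Running this, the dial position alone flips the chosen maneuver (swerve under MINIMIZE_HARM, brake under the other two), which is exactly the liability-shifting move Lin argues is a terrible idea.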

The entire story is here.

Sunday, June 1, 2014

The Ethics of Automated Cars

By Patrick Lin
Wired Magazine
Originally published May 6, 2014

Here is an excerpt:

Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require the deliberate and systematic discrimination of, say, large vehicles to collide into. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?

What seemed to be a sensible programming design, then, runs into ethical challenges. Volvo and other SUV owners may have a legitimate grievance against the manufacturer of robot cars that favor crashing into them over smaller cars, even if physics tells us this is for the best.

The entire story is here.

Tuesday, April 15, 2014

Automated ethics

When is it ethical to hand our decisions over to machines? And when is external automation a step too far?

by Tom Chatfield
Aeon Magazine
Originally published March 31, 2014

Here is an excerpt:

Automation, in this context, is a force pushing old principles towards breaking point. If I can build a car that will automatically avoid killing a bus full of children, albeit at great risk to its driver’s life, should any driver be given the option of disabling this setting? And why stop there: in a world that we can increasingly automate beyond our reaction times and instinctual reasoning, should we trust ourselves even to conduct an assessment in the first place?

Beyond the philosophical friction, this last question suggests another reason why many people find the trolley disturbing: because its consequentialist resolution presents not only the possibility that an ethically superior action might be calculable via algorithm (not in itself a controversial claim) but also that the right algorithm can itself be an ethically superior entity to us.

The entire article is here.

Thursday, July 18, 2013

When states monitored their citizens we used to call them authoritarian. Now we think this is what keeps us safe

By Suzanne Moore
The Guardian - Comments
Originally published July 3, 2013

Here is an excerpt:

What I failed to grasp, though, was quite how much I had already surrendered my liberty, not just personally but my political ideals about what liberty means. I simply took for granted that everyone can see everything and laughed at the idea that Obama will be looking at my pictures of a cat dressed as a lobster. I was resigned to the fact that some random FBI merchant will wonder at the inane and profane nature of my drunken tweets.

Slowly but surely, The Lives of Others have become ours. CCTV cameras everywhere watch us, so we no longer watch out for each other. Public space is controlled. Of course, much CCTV footage is never seen and often useless. But we don't need the panopticon once we have built one in our own minds. We are all suspects.

Or at least consumers. iTunes thinks I might like Bowie; Amazon thinks I want a compact tumble dryer. Really? Facebook seems to think I want to date men in uniform. I revel in the fact that the algorithms get it as wrong as the man who knocks on my door selling fish out of a van. "And not just fish," as he sometimes says mysteriously.

The entire comment is here.