Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, April 15, 2018

What’s Next for Humanity: Automation, New Morality and a ‘Global Useless Class’

Kimiko de Freytas-Tamura
The New York Times
Originally published March 19, 2018

What will our future look like — not in a century but in a mere two decades?

Terrifying, if you’re to believe Yuval Noah Harari, the Israeli historian and author of “Sapiens” and “Homo Deus,” a pair of audacious books that offer a sweeping history of humankind and a forecast of what lies ahead: an age of algorithms and technology that could see us transformed into “super-humans” with godlike qualities.

In an event organized by The New York Times and How To Academy, Mr. Harari gave his predictions to the Times columnist Thomas L. Friedman. Humans, he warned, “have created such a complicated world that we’re no longer able to make sense of what is happening.” Here are highlights of the interview.

Artificial intelligence and automation will create a ‘global useless class.’

Just as the Industrial Revolution created the working class, automation could create a “global useless class,” Mr. Harari said, and the political and social history of the coming decades will revolve around the hopes and fears of this new class. Disruptive technologies, which have helped bring enormous progress, could be disastrous if they get out of hand.

“Every technology has a good potential and a bad potential,” he said. “Nuclear war is obviously terrible. Nobody wants it. The question is how to prevent it. With disruptive technology the danger is far greater, because it has some wonderful potential. There are a lot of forces pushing us faster and faster to develop these disruptive technologies and it’s very difficult to know in advance what the consequences will be, in terms of community, in terms of relations with people, in terms of politics.”

The article is here.

The video is worth watching.

Please read "Sapiens" and "Homo Deus" by Yuval Noah Harari.

Saturday, April 14, 2018

The AI Cargo Cult: The Myth of a Superhuman AI

Kevin Kelly
www.wired.com
Originally published April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius — almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The information is here.

Friday, April 13, 2018

The Farmbots Are Coming

Matt Jancer
www.wired.com
Originally published March 9, 2018

The first fully autonomous ground vehicles hitting the market aren’t cars or delivery trucks—they’re robo-farmhands. The Dot Power Platform is a prime example of an explosion in advanced agricultural technology, which Goldman Sachs predicts will raise crop yields 70 percent by 2050. But Dot isn’t just a tractor that can drive without a human for backup. It’s the Transformer of ag-bots, capable of performing 100-plus jobs, from hay baler and seeder to rock picker and manure spreader, via an arsenal of tool modules. And though the hulking machine can carry 40,000 pounds, it navigates fields with balletic precision.

The information is here.

Computer Says "No": Part 1 - Algorithm Bias

Jasmine Leonard
www.thersa.org
Originally published March 14, 2018

From the court room to the workplace, important decisions are increasingly being made by so-called "automated decision systems". Critics claim that these decisions are less scrutable than those made by humans alone, but is this really the case? In the first of a three-part series, Jasmine Leonard considers the issue of algorithmic bias and how it might be avoided.

Recent advances in AI have a lot of people worried about the impact of automation.  One automatable task that’s received a lot of attention of late is decision-making.  So-called “automated decision systems” are already being used to decide whether or not individuals are given jobs, loans or even bail.  But there’s a lack of understanding about how these systems work, and as a result, a lot of unwarranted concerns.  In this three-part series I attempt to allay three of the most widely discussed fears surrounding automated decision systems: that they’re prone to bias, impossible to explain, and that they diminish accountability.

Before we begin, it’s important to be clear just what we’re talking about, as the term “automated decision” is incredibly misleading.  It suggests that a computer is making a decision, when in reality this is rarely the case.  What actually happens in most examples of “automated decisions” is that a human makes a decision based on information generated by a computer.  In the case of AI systems, the information generated is typically a prediction about the likelihood of something happening; for instance, the likelihood that a defendant will reoffend, or the likelihood that an individual will default on a loan.  A human will then use this prediction to make a decision about whether or not to grant a defendant bail or give an individual a credit card.  When described like this, it seems somewhat absurd to say that these systems are making decisions.  I therefore suggest that we call them what they actually are: prediction engines.
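Leonard's "prediction engine" framing is easy to make concrete. The following minimal Python sketch separates the two roles she describes: the computer returns only a probability, and a human policy turns that probability into the actual decision. The toy model, its coefficients, the feature names, and the threshold are all hypothetical, invented here purely for illustration, not taken from any real system.

import math

# A minimal sketch of the "prediction engine" framing described above.
# Every number and feature name here is made up for illustration.

def predict_default_risk(applicant):
    """The computer's role: return a probability, not a decision."""
    # Toy logistic model with invented coefficients.
    score = (-2.0
             + 0.8 * applicant["missed_payments"]
             - 0.5 * applicant["years_employed"])
    return 1.0 / (1.0 + math.exp(-score))  # likelihood of default

def human_decision(risk, policy_threshold=0.3):
    """The human's role: apply a policy to the prediction and own the outcome."""
    return "deny credit" if risk > policy_threshold else "grant credit"

applicant = {"missed_payments": 2, "years_employed": 4}
risk = predict_default_risk(applicant)  # the system only predicts
print(f"predicted risk: {risk:.2f} -> decision: {human_decision(risk)}")

Framed this way, questions about bias attach to the probability the engine produces and to the threshold the human chooses, not to a "decision" the machine never actually makes.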

Thursday, April 12, 2018

CA’s Tax On Millionaires Yields Big Benefits For People With Mental Illness

Anna Gorman
Kaiser Health News
Originally published March 14, 2018

A statewide tax on the wealthy has significantly boosted mental health programs in California’s largest county, helping to reduce homelessness, incarceration and hospitalization, according to a report released Tuesday.

Revenue from the tax, the result of a statewide initiative passed in 2004, also expanded access to therapy and case management to almost 130,000 people up to age 25 in Los Angeles County, according to the report by the Rand Corp. Many were poor and from minority communities, the researchers said.

“Our results are encouraging about the impact these programs are having,” said Scott Ashwood, one of the authors and an associate policy researcher at Rand. “Overall we are seeing that these services are reaching a vulnerable population that needs them.”

The positive findings came just a few weeks after a critical state audit accused California counties of hoarding the mental health money — and the state of failing to ensure that the money was being spent. The February audit said that the California Department of Health Care Services allowed local mental health departments to accumulate $231 million in unspent funds by the end of the 2015-16 fiscal year — which should have been returned to the state because it was not spent in the allowed time frame.

Proposition 63, now known as the Mental Health Services Act, imposed a 1 percent tax on people who earn more than $1 million annually to pay for expanded mental health care in California. The measure raises about $2 billion each year for services such as preventing mental illness from progressing, reducing stigma and improving treatment. Altogether, counties have received $16.53 billion.

The information is here.

The Tech Industry’s War on Kids

Richard Freed
Medium.com
Originally published March 12, 2018

Here is an excerpt:

Fogg speaks openly of the ability to use smartphones and other digital devices to change our ideas and actions: “We can now create machines that can change what people think and what people do, and the machines can do that autonomously.” Called “the millionaire maker,” Fogg has groomed former students who have used his methods to develop technologies that now consume kids’ lives. As he recently touted on his personal website, “My students often do groundbreaking projects, and they continue having impact in the real world after they leave Stanford… For example, Instagram has influenced the behavior of over 800 million people. The co-founder was a student of mine.”

Intriguingly, there are signs that Fogg is feeling the heat from recent scrutiny of the use of digital devices to alter behavior. His boast about Instagram, which was present on his website as late as January of 2018, has been removed. Fogg’s website also has lately undergone a substantial makeover, as he now seems to go out of his way to suggest his work has benevolent aims, commenting, “I teach good people how behavior works so they can create products & services that benefit everyday people around the world.” Likewise, the Stanford Persuasive Technology Lab website optimistically claims, “Persuasive technologies can bring about positive changes in many domains, including health, business, safety, and education. We also believe that new advances in technology can help promote world peace in 30 years.”

While Fogg emphasizes persuasive design’s sunny future, he is quite indifferent to the disturbing reality now: that hidden influence techniques are being used by the tech industry to hook and exploit users for profit. His enthusiastic vision also conveniently neglects to include how this generation of children and teens, with their highly malleable minds, is being manipulated and hurt by forces unseen.

The article is here.

Wednesday, April 11, 2018

What to do with those divested billions? The only way is ethics

Juliette Jowit
The Guardian
Originally posted March 15, 2018

Here is an excerpt:

“I would not feel comfortable gaining from somebody else’s misery,” explains company owner and private investor Rebecca Hughes.

Institutions too are heading in the same direction: nearly 80% of investors across 30 countries told last year’s Schroders’ Global Investor Study that sustainability had become more important to them over the last five years.

“While profitability remains the central investment consideration, interest in sustainability is increasing,” said Jessica Ground, Schroders’ global head of stewardship. “But investors also see sustainability and profits as intertwined.”

UBS’s "Doing well by doing good" report claims more than half the UK public would pay more for goods or services with a conscience. Many more people will want better ethical standards, even if they don’t want to, or can’t afford to, pay for them.

“It’s in my upbringing: you treat others in the way you’d like to be treated,” says Hughes.

More active financial investors are also taking the issues seriously. Several have indices that track the value of shares in companies that are not doing ‘bad’, or are actively doing ‘good’. One is Morgan Stanley, whose two environmental, social and governance (ESG) indices – also covering weapons and women’s progress – were worth $62bn by last summer.

The information is here.

How One Bad Employee Can Corrupt a Whole Team

Stephen Dimmock and William C. Gerken
Harvard Business Review
Originally posted March 5, 2018

One bad apple, the saying goes, can ruin the bunch. So, too, with employees.

Our research on the contagiousness of employee fraud tells us that even your most honest employees become more likely to commit misconduct if they work alongside a dishonest individual. And while it would be nice to think that the honest employees would prompt the dishonest employees to better choices, that’s rarely the case.

Among co-workers, it appears easier to learn bad behavior than good.

For managers, it is important to realize that the costs of a problematic employee go beyond the direct effects of that employee’s actions — bad behaviors of one employee spill over into the behaviors of other employees through peer effects. When managers under-appreciate these spillover effects, a few malignant employees can infect an otherwise healthy corporate culture.

History — and current events — are littered with outbreaks of misconduct among co-workers: mortgage underwriters leading up to the financial crisis, stock brokers at boiler rooms such as Stratton Oakmont, and cross-selling by salespeople at Wells Fargo.

The information is here.

Tuesday, April 10, 2018

Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?

Lily Frank and Sven Nyholm
Artificial Intelligence and Law
September 2017, Volume 25, Issue 3, pp 305–323

Abstract

The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally, we canvass the most influential existing literature on the ethics of sex with robots.

The article is here.