Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, August 12, 2017

Reminder: the Trump International Hotel is still an ethics disaster

Carly Sitrin
Vox.com
Originally published August 8, 2017

The Trump International Hotel in Washington, DC, has been serving as a White House extension since Donald Trump took office, and experts think this violates several governmental ethics rules.

The Washington Post reported Monday that the Trump International Hotel has played host to countless foreign dignitaries, Republican lawmakers, and powerful actors hoping to hold court with Trump appointees or even the president himself.

Since visitation records at the Trump International Hotel are not made public, the Post sent reporters to the hotel every day in May to try to identify people and organizations using the facilities.

What they found was a revolving door of powerful people holding galas in the hotel’s lavish ballrooms and meeting over expensive cocktails with White House staff at the bar.

They included Rep. Dana Rohrabacher (R-CA), whom Politico recently called “Putin’s favorite congressman”; Rep. Bill Shuster (R-PA), who chairs the House committee that oversees the General Services Administration, the Trump hotel’s landlord; and nine other Republican members of Congress who hosted events at the hotel, according to campaign spending disclosures obtained by the Post. Foreign visitors also rented out rooms, among them business groups promoting Turkish-American relations and Romanian President Klaus Iohannis and his wife.

The article is here.

Friday, August 11, 2017

What an artificial intelligence researcher fears about AI

Arend Hintze
TechXplore.com
Originally published July 14, 2017

Here is an excerpt:

Fear of the nightmare scenario

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligent system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time – somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things – as are saying we want to save the planet and successfully doing so.

The article is here.

The real problem (of consciousness)

Anil K Seth
Aeon.com
Originally posted November 2, 2016

Here is an excerpt:

The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).

Armed with this theory of perception, we can return to consciousness. Now, instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used transcranial magnetic stimulation (TMS) to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.
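The excerpt describes perception as the continual minimisation of prediction error between top-down predictions and bottom-up sensory signals. As a rough illustration of that loop (a toy sketch, not taken from Seth's article; the function name, learning rate, and numbers are invented for the example), here is a minimal single-level version in Python:

```python
# Toy sketch of predictive processing (illustrative only).
# The "brain" holds a guess about a hidden quantity, predicts the sensory
# signal it should receive, and repeatedly nudges the guess to shrink the
# prediction error.

def predictive_update(guess, sensory_signal, learning_rate=0.1, steps=50):
    """Iteratively reduce the error between a top-down prediction
    and a bottom-up sensory signal (single-level toy example)."""
    for _ in range(steps):
        prediction = guess                       # top-down prediction
        error = sensory_signal - prediction      # bottom-up prediction error
        guess = guess + learning_rate * error    # update the internal model
    return guess

# The guess converges toward the signal: a "fantasy" reined in by the world.
print(predictive_update(guess=0.0, sensory_signal=3.2))  # close to 3.2
```

In a real predictive-processing model this update runs simultaneously at many levels of a hierarchy, but the basic move, adjusting predictions to cancel prediction error, is the one shown here.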

The article is here.

Thursday, August 10, 2017

Predatory Journals Hit By ‘Star Wars’ Sting

By Neuroskeptic
discovermagazine.com
Originally published July 19, 2017

A number of so-called scientific journals have accepted a Star Wars-themed spoof paper. The manuscript is an absurd mess of factual errors, plagiarism and movie quotes. I know because I wrote it.

Inspired by previous publishing “stings”, I wanted to test whether ‘predatory’ journals would publish an obviously absurd paper. So I created a spoof manuscript about “midi-chlorians” – the fictional entities which live inside cells and give Jedi their powers in Star Wars. I filled it with other references to the galaxy far, far away, and submitted it to nine journals under the names of Dr Lucas McGeorge and Dr Annette Kin.

Four journals fell for the sting. The American Journal of Medical and Biological Research (SciEP) accepted the paper, but asked for a $360 fee, which I didn’t pay. Amazingly, three other journals not only accepted but actually published the spoof: the International Journal of Molecular Biology: Open Access (MedCrave), the Austin Journal of Pharmacology and Therapeutics (Austin), and the American Research Journal of Biosciences (ARJ). I hadn’t expected this, as all those journals charge publication fees, but I never paid them a penny.

The blog post is here.

Wednesday, August 9, 2017

Career of the Future: Robot Psychologist

Christopher Mims
The Wall Street Journal
Originally published July 9, 2017

Artificial-intelligence engineers have a problem: They often don’t know what their creations are thinking.

As artificial intelligence grows in complexity and prevalence, it also grows more powerful. AI already has factored into decisions about who goes to jail and who receives a loan. There are suggestions AI should determine who gets the best chance to live when a self-driving car faces an unavoidable crash.

Defining AI is slippery and growing more so, as startups slather the buzzword over whatever they are doing. It is generally accepted as any attempt to ape human intelligence and abilities.

One subset that has taken off is neural networks, systems that “learn” as humans do through training, turning experience into networks of simulated neurons. The result isn’t code, but an unreadable, tangled mass of millions—in some cases billions—of artificial neurons, which explains why those who create modern AIs can be befuddled as to how they solve tasks.

Most researchers agree the challenge of understanding AI is pressing. If we don’t know how an artificial mind works, how can we ascertain its biases or predict its mistakes?

We won’t know in advance if an AI is racist, or what unexpected thought patterns it might have that would make it crash an autonomous vehicle. We might not know about an AI’s biases until long after it has made countless decisions. It’s important to know when an AI will fail or behave unexpectedly—when it might tell us, “I’m sorry, Dave. I’m afraid I can’t do that.”

“A big problem is people treat AI or machine learning as being very neutral,” said Tracy Chou, a software engineer who worked with machine learning at Pinterest Inc. “And a lot of that is people not understanding that it’s humans who design these models and humans who choose the data they are trained on.”
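The piece's central claim is that a trained network is not readable code but a tangle of weighted connections. As a hedged illustration of that point (a toy sketch, not anything from the Wall Street Journal article; the XOR task, layer sizes, and learning rate are invented for the example), the snippet below trains a minimal network and then prints its learned weights, which reveal essentially nothing about how it reaches its answers:

```python
import numpy as np

# A tiny network trained on XOR with plain gradient descent. The point is not
# the task but the artefact it produces: after training, everything the model
# "knows" is encoded in the weight arrays, which are just numbers and offer no
# human-readable account of why any particular decision was made.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = sigmoid(h @ W2 + b2)               # network's predictions
    d_out = (out - y) * out * (1 - out)      # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)       # error signal at the hidden layer
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # usually close to [0, 1, 1, 0]
print(W1, W2, sep="\n")       # the "explanation": an opaque pile of numbers
```

Scale these four hidden units up to millions or billions of weights and the inspection problem the article describes, and the need for something like a "robot psychologist", becomes clear.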

The article is here.

Tuesday, August 8, 2017

The next big corporate trend? Actually having ethics.

Patrick Quinlan
Recode.net
Originally published July 20, 2017

Here is an excerpt:

Slowly, brands are waking up to the fact that strong ethics and core values are no longer a “nice to have,” but a necessity. Failure to take responsibility in times of crisis can take an irreparable toll on the trust companies have worked so hard to build with employees, partners and customers. So many brands are still getting it wrong, and the consequences are real — public boycotting, massive fines, fired CEOs and falling stock prices.

This shift is what I call ethical transformation — the application of ethics and values across all aspects of business and society. It’s as impactful and critical as digital transformation, the other megatrend of the last 20 years. You can’t have one without the other. The internet stripped away barriers between consumers and brands, meaning that transparency and attention to ethics and values are at an all-time high. Brands have to get on board, now. Consider some oft-cited casualties of the digital transformation: Blockbuster, Kodak and Sears. That same fate awaits companies that can’t or won’t prioritize ethics and values.

This is a good thing. Ethical transformation pushes us into a better future, one built on genuinely ethical companies. But it’s not easy. In fact, it’s pretty hard. And it takes time. For decades, most of the business world focused on what not to do or how not to get fined. (In a word: Compliance.) Every so often, ethics and its even murkier brother “values” got a little love as an afterthought. Brands that did focus on values and ethics were considered exceptions to the rule — the USAAs and Toms shoes of the world. No longer.

The article is here.

Monday, August 7, 2017

Study suggests why more skin in the game won't fix Medicaid

Don Sapatkin
Philly.com
Originally posted July 19, 2017

Here is an excerpt:

Previous studies have found that increasing cost-sharing causes consumers to skip medical care somewhat indiscriminately. The Dutch research was the first to examine the impact of cost-sharing changes on specialty mental health care, the authors wrote.

Jalpa A. Doshi, a researcher at the University of Pennsylvania’s Leonard Davis Institute of Health Economics, has examined how Americans with commercial insurance respond to cost-sharing for antidepressants.

“Because Medicaid is the largest insurer of low-income individuals with serious mental illnesses such as schizophrenia and bipolar disorder in the United States, lawmakers should be cautious on whether an increase in cost sharing for such a vulnerable group may be a penny-wise, pound-foolish policy,” Doshi said in an email after reading the new study.

Michael Brody, president and CEO of Mental Health Partnerships, formerly the Mental Health Association of Southeastern Pennsylvania, had an even stronger reaction about the possible implications for Medicaid patients.

The article is here.

Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci

Sven Nyholm
Science and Engineering Ethics
pp 1–19

Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human–robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.

The article is here.

Sunday, August 6, 2017

An erosion of ethics oversight should make us all more cynical about Trump

The Editorial Board
The Los Angeles Times
Originally published August 4, 2017

President Trump’s problems with ethics are manifest, from his refusal to make public his tax returns to the conflicts posed by his continued stake in the Trump Organization and its properties around the world — including the Trump International Hotel just down the street from the White House, in a building leased from the federal government he’s now in charge of. The president’s stubborn refusal to hew to the ethical norms set by his predecessors has left the nation to rightfully question whose best interests are foremost in his mind.

Some of the more persistent challenges to the Trump administration’s comportment have come from the Office of Government Ethics, whose recently departed director, Walter M. Shaub Jr., fought with the administration frequently over federal conflict-of-interest regulations. Under agency rules, chief of staff Shelley K. Finlayson should have been Shaub’s successor until the president nominated a new director, who would need Senate confirmation.

But Trump upended that transition last month by naming the office’s general counsel, David J. Apol, as the interim director. Apol has a reputation within the agency for taking contrarian — and usually more lenient — stances on ethics requirements than did Shaub and the consensus opinion of the staff (including Finlayson). And that, of course, raises the question of whether the White House replaced Finlayson with Apol in hopes of having a more conciliatory ethics chief without enduring a grueling nomination fight.

The article is here.