Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, July 10, 2017

When Are Doctors Too Old to Practice?

By Lucette Lagnado
The Wall Street Journal
Originally posted June 24, 2017

Here is an excerpt:

Testing older physicians for mental and physical ability is growing more common. Nearly a fourth of physicians in America are 65 or older, and 40% of these are actively involved in patient care, according to the American Medical Association. Experts at the AMA have suggested that they be screened lest they pose a risk to patients. An AMA working group is considering guidelines.

Concern over older physicians' mental states--and whether it is safe for them to care for patients--has prompted a number of institutions, from Stanford Health Care in Palo Alto, Calif., to Driscoll Children's Hospital in Corpus Christi, Texas, to the University of Virginia Health System, to adopt age-related physician policies in recent years. The goal is to spot problems, in particular signs of cognitive decline or dementia.

Now, as more institutions like Cooper embrace the measures, they are roiling some older doctors and raising questions of fairness, scientific validity--and ageism.

"It is not for the faint of heart, this policy," said Ann Weinacker, 66, the former chief of staff at the hospital and professor of medicine at Stanford University who has overseen the controversial efforts to implement age-related screening at Stanford hospital.

A group of doctors has been battling Stanford's age-based physician policies for the past five years, contending they are demeaning and discriminatory. The older doctors got the medical staff to scrap a mental-competency exam aimed at testing for cognitive impairment. Most, like Frank Stockdale, an 81-year-old breast-cancer specialist, refused to take it.

The article is here.

Big Pharma gives your doctor gifts. Then your doctor gives you Big Pharma’s drugs

Nicole Van Groningen
The Washington Post
Originally posted June 13, 2017

Here is an excerpt:

The losers in this pharmaceutical industry-physician interaction are, of course, patients. The high costs of branded drugs are revenue to drug companies, but out-of-pocket expenses to health-care consumers. Almost a quarter of Americans who take prescription drugs report that they have difficulty affording their medications, and the high cost of these drugs is a leading reason that patients can't adhere to them. Most branded drugs offer minimal — if any — benefit over generic formulations. And if doctors prescribe brand-name drugs that are prohibitively more expensive than generic options, patients might forgo the medications altogether — causing greater harm.

On a national scale, the financial burden imposed by branded drugs is enormous. Current estimates place our prescription drug spending at more than $400 billion annually, and branded drugs are almost entirely to blame: Though they constitute only 10 percent of prescriptions, they account for 72 percent of total drug spending. Even modest reductions in our use of branded prescription drugs — on par with the roughly 8 percent relative reduction seen in the JAMA study — could translate to billions of dollars in national health-care savings.
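The back-of-the-envelope arithmetic behind that claim can be sketched from the figures quoted above; the generic-substitution offset in the last step is a hypothetical figure, not from the article:

```python
# Rough savings estimate using figures quoted in the excerpt.
total_spend = 400e9          # annual US prescription drug spending (dollars)
branded_share = 0.72         # branded drugs' share of total drug spending
branded_reduction = 0.08     # ~8% relative reduction seen in the JAMA study

branded_spend = total_spend * branded_share            # ~$288 billion
gross_savings = branded_spend * branded_reduction      # ~$23 billion

# Patients switched away from branded drugs still buy generics; assume
# generics cost ~20% of the branded price (an illustrative assumption).
generic_offset = 0.20
net_savings = gross_savings * (1 - generic_offset)     # ~$18 billion

print(f"Gross savings: ${gross_savings/1e9:.1f}B, net: ${net_savings/1e9:.1f}B")
```

Even with a generous allowance for the cost of substitute generics, the estimate lands in the billions, consistent with the article's claim.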

The article is here.

Sunday, July 9, 2017

Letter from the American Medical Association to McConnell and Schumer

James L. Madara
Letter from the American Medical Association
Sent June 26, 2017

To: Senators McConnell and Schumer

On behalf of the physician and medical student members of the American Medical Association (AMA), I am writing to express our opposition to the discussion draft of the “Better Care Reconciliation Act” released on June 22, 2017. Medicine has long operated under the precept of Primum non nocere, or “first, do no harm.” The draft legislation violates that standard on many levels.

In our January 3, 2017 letter to you, and in subsequent communications, we have consistently urged that the Senate, in developing proposals to replace portions of the current law, pay special attention to ensure that individuals currently covered do not lose access to affordable, quality health insurance coverage. In addition, we have advocated for the sufficient funding of Medicaid and other safety net programs and urged steps to promote stability in the individual market. Though we await additional analysis of the proposal, it seems highly likely that a combination of smaller subsidies resulting from lower benchmarks and the increased likelihood of waivers of important protections such as required benefits, actuarial value standards, and out-of-pocket spending limits will expose low- and middle-income patients to higher costs and greater difficulty in affording care.

The AMA is particularly concerned with proposals to convert the Medicaid program into a system that limits the federal obligation to care for needy patients to a predetermined formula based on per-capita caps.

The entire letter is here.

Saturday, July 8, 2017

The Ethics of CRISPR

Noah Robischon
Fast Company
Originally published on June 20, 2017

On the eve of publishing her new book, Jennifer Doudna, a pioneer in the field of CRISPR-Cas9 biology and genome engineering, spoke with Fast Company about the potential for this new technology to be used for good or evil.

“The worst thing that could happen would be for [CRISPR] technology to be speeding ahead in laboratories,” Doudna tells Fast Company. “Meanwhile, people are unaware of the impact that’s coming down the road.” That’s why Doudna and her colleagues have been raising awareness of the following issues.

DESIGNER HUMANS

Editing sperm cells or eggs—known as germline manipulation—would introduce inheritable genetic changes at inception. This could be used to eliminate genetic diseases, but it could also be a way to ensure that your offspring have blue eyes, say, and a high IQ. As a result, several scientific organizations and the National Institutes of Health have called for a moratorium on such experimentation. But, writes Doudna, “it’s almost certain that germline editing will eventually be safe enough to use in the clinic.”

The article is here.

Israeli education minister's ethics code would bar professors from expressing political opinions

Yarden Skop
Haaretz
Originally posted June 10, 2017

An ethics code devised at Education Minister Naftali Bennett's behest would bar professors from expressing political opinions, it emerged Friday.

The code, put together by Asa Kasher, an ethics and philosophy professor at Tel Aviv University, would also forbid staff from calling for an academic boycott of Israel.

Bennett had asked Kasher a few months ago to write a set of rules for appropriate political conduct at academic institutions. Kasher had written the Israel Defense Forces' ethics code.

The contents of the document, which were first reported by the Yedioth Ahronoth newspaper on Friday, will soon be submitted for the approval of the Council for Higher Education.

The article is here.

Friday, July 7, 2017

Federal ethics chief resigns after clashes with Trump

Lauren Rosenblatt
The Los Angeles Times
Originally posted July 6, 2017

Walter Shaub Jr., director of the U.S. Office of Government Ethics, announced Thursday he would resign, following a rocky relationship with President Trump and repeated confrontations with the administration.

Shaub, appointed by President Obama in 2013, had unsuccessfully pressed Trump to divest his business interests to avoid potential conflicts of interest, something Trump refused to do.

The ethics watchdog also engaged in a public battle with the White House over his demands for more information about former lobbyists and other appointees who had been granted waivers from ethics rules. After initially balking, the White House eventually released the requested information about the waivers.

Shaub called for a harsher punishment for presidential advisor Kellyanne Conway after she flouted ethics rules by publicly endorsing Ivanka Trump’s clothing line during a television appearance.

The article is here.

Is The Concern Artificial Intelligence — Or Autonomy?

Alva Noe
npr.org
Originally posted June 16, 2017

Here is an excerpt:

The big problem AI faces is not the intelligence part, really. It's the autonomy part. Finally, at the end of the day, even the smartest computers are tools, our tools — and their intentions are our intentions. Or, to the extent that we can speak of their intentions at all — for example of the intention of a self-driving car to avoid an obstacle — we have in mind something it was designed to do.

Even the most primitive organism, in contrast, at least seems to have a kind of autonomy. It really has its own interests. Light. Food. Survival. Life.

The danger of our growing dependence on technologies is not really that we are losing our natural autonomy in quite this sense. Our needs are still our needs. But it is a loss of autonomy, nonetheless. Even auto mechanics these days rely on diagnostic computers and, in the era of self-driving cars, will any of us still know how to drive? Think what would happen if we lost electricity, or if the grid were really and truly hacked? We'd be thrown back into the 19th century, as Dennett says. But in many ways, things would be worse. We'd be thrown back — but without the knowledge and know-how that made it possible for our ancestors to thrive in the olden days.

I don't think this fear is unrealistic. But we need to put it in context.

The article is here.

Thursday, July 6, 2017

The Torturers Speak

The Editorial Board
The New York Times
Originally posted June 23, 2017

It’s hard to watch the videotaped depositions of the two former military psychologists who, working as independent contractors, designed, oversaw and helped carry out the “enhanced interrogation” of detainees held at C.I.A. black sites in the months after the Sept. 11 terror attacks.

The men, Bruce Jessen and James Mitchell, strike a professional pose. Dressed in suits and ties, speaking matter-of-factly, they describe the barbaric acts they and others inflicted on the captives, who were swept up indiscriminately and then waterboarded, slammed into walls, locked in coffins and more — all in the hunt for intelligence that few, if any, of them possessed.

One died of apparent hypothermia.

Many others were ultimately released without charge.

When pushed to confront the horror and uselessness of what they had done, the psychologists fell back on one of the oldest justifications of wartime. “We were soldiers doing what we were instructed to do,” Dr. Jessen said.

Perhaps, but they were also soldiers whose contracting business was paid more than $81 million.

The information is here.

What the Rise of Sentient Robots Will Mean for Human Beings

George Musser
NBC
Originally posted June 19, 2017

Here is an excerpt:

“People expect that self-awareness is going to be this end game of artificial intelligence when really there are no scientific pursuits where you start at the end,” says Justin Hart, a computer scientist at the University of Texas. He and other researchers are already building machines with rudimentary minds. One robot wriggles like a newborn baby to understand its body. Another robot babbles about what it sees and cries when you hit it. Another sets off to explore its world on its own.

No one claims that robots have a rich inner experience — that they have pride in floors they've vacuumed or delight in the taste of 120-volt current. But robots can now exhibit some qualities similar to those of the human mind, including empathy, adaptability, and gumption.

Beyond it just being cool to create robots, researchers design these cybernetic creatures because they’re trying to fix flaws in machine-learning systems. Though these systems may be powerful, they are opaque. They work by relating input to output, like a test where you match items in column ‘A’ with items in column ‘B’. The AI systems basically memorize these associations. There’s no deeper logic behind the answers they give. And that’s a problem.

Humans can also be hard to read. We spend an inordinate amount of time analyzing ourselves and others, and arguably, that’s the main role of our conscious minds. If machines had minds, they might not be so inscrutable. We could simply ask them why they did what they did.

The article is here.