Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, July 31, 2017

Truth or Punishment: Secrecy and Punishing the Self

Michael L. Slepian and Brock Bastian
Personality and Social Psychology Bulletin
First published July 14, 2017, pp. 1–17

Abstract

We live in a world that values justice; when a crime is committed, just punishment is expected to follow. Keeping one’s misdeed secret therefore appears to be a strategic way to avoid (just) consequences. Yet, people may engage in self-punishment to right their own wrongs to balance their personal sense of justice. Thus, those who seek an escape from justice by keeping secrets may in fact end up serving that same justice on themselves (through self-punishment). Six studies demonstrate that thinking about secret (vs. confessed) misdeeds leads to increased self-punishment (increased denial of pleasure and seeking of pain). These effects were mediated by the feeling one deserved to be punished, moderated by the significance of the secret, and were observed for both self-reported and behavioral measures of self-punishment.

Here is an excerpt:

Recent work suggests, however, that people who are reminded of their own misdeeds will sometimes seek out their own justice. That is, even subtle acts of self-punishment can restore a sense of personal justice, whereby a wrong feels to have been righted (Bastian et al., 2011; Inbar et al., 2013). Thus, we predicted that even though keeping a misdeed secret could lead one to avoid being punished by others, it still could prompt a desire for punishment all the same, one inflicted by the self.

The article is here.

Note: There are significant implications in this article for psychotherapists.

Is it dangerous to recreate flawed human morality in machines?

Alexandra Myers-Lewis
Wired.com
Originally published July 13, 2017

Here are two excerpts:

The need for ethical machines may be one of the defining issues of our time. Algorithms are created to govern critical systems in our society, from banking to medicine, but with no concept of right and wrong, machines cannot understand the repercussions of their actions. A machine has never thrown a punch in a schoolyard fight, cheated on a test or a relationship, or been rapt with the special kind of self-doubt that funds our cosmetic and pharmaceutical industries. Simply put, an ethical machine will always be an it – but how can it be more?

(cut)

A self-driving car wouldn't just have to make decisions in life-and-death situations – as if that wasn't enough – but would also need to judge how much risk is acceptable at any given time. But who will ultimately restrict this decision-making process? Would it be the job of the engineer to determine in which circumstances it is acceptable to overtake a cyclist? You won't lose sleep pegging a deer over a goat. But a person? Choosing who potentially lives and dies based on a number has an inescapable air of dystopia. You may see tight street corners and hear the groan of oncoming traffic, but an algorithm will only see the world in numbers. These numbers will form its memories and its reason, the force that moves the car out into the road.

"I think people will be very uncomfortable with the idea of a machine deciding between life and death," Sütfeld says, "In this regard we believe that transparency and comprehensibility could be a very important factor to gain public acceptance of these systems. Or put another way, people may favour a transparent and comprehensible system over a more complex black-box system. We would hope that the people will understand this general necessity of a moral compass and that the discussion will be about what approach to take, and how such systems should decide. If this is put in, every car will make the same decision and if there is a good common ground in terms of model, this could improve public safety."

The article is here.

Sunday, July 30, 2017

Should we be afraid of AI?

Luciano Floridi
aeon
Originally published

Here is an excerpt:

True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.

We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which indicates that proof systems in logic on the one hand and the models of computation on the other, are in fact structurally the same kind of objects, and so any logical limit applies to computers as well. Plenty of machines can do amazing things, including playing checkers, chess and Go and the quiz show Jeopardy better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine. The point is that our smart technologies – also thanks to the enormous amount of available data and some very sophisticated programming – are increasingly able to deal with more tasks better than we do, including predicting our behaviours. So we are not the only agents able to perform tasks successfully.

Engineering Eden: The quest for eternal life

Kristin Kostick
Baylor College of Medicine
Originally posted June 2, 2017

If you’re like most people, you may associate the phrase “eternal life” with religion: The promise that we can live forever if we just believe in God. You probably don’t associate the phrase with an image of scientists working in a lab, peering at worms through microscopes or mice skittering through boxes. But you should.

The quest for eternal life has only recently begun to step out from behind the pews and into the petri dish.

I recently discussed the increasing feasibility of the transhumanist vision due to continuing advancements in biotech, gene- and cell-therapies. These emerging technologies, however, don’t erase the fact that religion – not science – has always been our salve for confronting death’s inevitability. For believers, religion provides an enduring mechanism (belief and virtue) behind the perpetuity of existence, and shushes our otherwise frantic inability to grasp: How can I, as a person, just end?

The Mormon transhumanist Lincoln Cannon argues that science, rather than religion, offers a tangible solution to this most basic existential dilemma. He points out that it is no longer tenable to believe in eternal life as only available in heaven, requiring the death of our earthly bodies before becoming eternal, celestial beings.

Would a rational person choose to believe in an uncertain, spiritual afterlife over the tangible persistence of one’s own familiar body and the comforting security of relationships we’ve fostered over a lifetime of meaningful interactions?

The article is here.

Saturday, July 29, 2017

On ethics, Trump is leading America in the wrong direction

Jeffrey D. Sachs
CNN.com
Originally published July 26, 2017

Here is an excerpt:

So here we are. Bribes are no longer bribes, campaign funds from corporations are free speech, and the politicians are just being good public servants when they accept money from those who seek their favor. Crooked politicians are thrilled; the rest of us look on shocked at the pageantry of cynicism and immorality. Senior officials in law-abiding countries have told me they can hardly believe their eyes as to what is underway in the United States.

Which brings us to Donald Trump. Trump seems to know no limits whatsoever in his commingling of the public interest and his personal business interests. He failed to give up his ownership interest in his businesses upon taking office. (Trump resigned from positions in his companies and said his two sons are in charge.)

Government and Republican Party activities have been booked into Trump properties. Trump campaign funds are used to hire lawyers to defend Donald Trump Jr. in the Russia probe. Campaign associates such as Paul Manafort and Michael Flynn have been under scrutiny for their business dealings with clients tied to foreign governments.

In response to the stench, the former head of the government ethics office recently resigned, declaring that the United States is "pretty close to a laughingstock at this point." The resignation was not remarkable under the circumstances. What was remarkable is that most Republicans politicians remain mum to these abuses. Of course too many politicians of both parties are deeply compromised by financial dependence on corporate campaign donors.

The article is here.

Trump Has Plunged Nation Into ‘Ethics Crisis,’ Ex-Watchdog Says

Britain Eakin
Courthouse News Service
Originally published July 28, 2017

The government’s former top ethics chief sounded the alarm Friday, saying the first six months of the Trump administration have been “an absolute shock to the system” that has plunged the nation into “an ethics crisis.”

Walter Shaub Jr. resigned July 6 after months of clashes with the White House over issues such as President Trump’s refusal to divest his businesses and the administration’s delay in disclosing ethics waivers for appointees.

As he left office he told NPR that “the current situation has made it clear that the ethics program needs to be stronger than it is.”

He did not elaborate at that time on what about the “situation” so troubled him, but he said that at the Campaign Legal Center he would have more freedom “to push for reform” while broadening his focus to ethics issues at all levels of government.

During a talk at the National Press Club Friday morning, Shaub said the president and other administration officials have departed from ethical principles and norms as part of a broader assault on the American representative form of government.

Shaub said he is “extremely concerned” by this.

“The biggest concern is that norms evolve. So if we have a shock to the system, what we’re experiencing now could become the new norm,” Shaub said.

The article is here.

Ethics and Governance AI Fund funnels $7.6M to Harvard, MIT and independent research efforts

Devin Coldewey
Tech Crunch
Originally posted July 11, 2017

A $27 million fund aimed at applying artificial intelligence to the public interest has announced the first targets for its beneficence: $7.6 million will be split unequally among MIT’s Media Lab, Harvard’s Berkman Klein Center and seven smaller research efforts around the world.

The Ethics and Governance of Artificial Intelligence Fund was created by Reid Hoffman, Pierre Omidyar and the Knight Foundation back in January; the intention was to ensure that “social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers” have a say in how AI is developed and deployed.

To that end, this first round of fundings supports existing organizations working along those lines, as well as nurturing some newer ones.

The lion’s share of this initial round, $5.9 million, will be split by MIT and Harvard, as the initial announcement indicated. Media Lab is, of course, on the cutting edge of many research efforts in AI and elsewhere; Berkman Klein focuses more on the legal and analysis side of things.

The fund’s focuses are threefold:

  • Media and information quality – looking at how to understand and control the effects of autonomous information systems and “influential algorithms” like Facebook’s news feed.
  • Social and criminal justice – perhaps the area where the bad influence of AI-type systems could be the most insidious; biases in data and interpretation could be baked into investigative and legal systems, giving them the illusion of objectivity. (Obviously the fund seeks to avoid this.)
  • Autonomous cars – although this may seem incongruous with the others, self-driving cars represent an immense social opportunity. Mobility is one of the most influential social-economic factors, and its reinvention offers a chance to improve the condition of nearly everyone on the planet — great potential for both advancement and abuse.

Friday, July 28, 2017

You are fair, but I expect you to also behave unfairly

Positive asymmetry in trait-behavior relations for moderate morality information

Patrice Rusconi, Simona Sacchi, Roberta Capellini, Marco Brambilla, Paolo Cherubini
PLOS One
Published: July 11, 2017

Summary: People who are believed to be immoral are unable to reverse individuals' perception of them, potentially resulting in difficulties in the workplace and barriers in accessing fair and equal treatment in the legal system.

Abstract

Trait inference in person perception is based on observers’ implicit assumptions about the relations between trait adjectives (e.g., fair) and the either consistent or inconsistent behaviors (e.g., having double standards) that an actor can manifest. This article presents new empirical data and theoretical interpretations on people’s behavioral expectations, that is, people’s perceived trait-behavior relations along the morality (versus competence) dimension. We specifically address the issue of the moderate levels of both traits and behaviors almost neglected by prior research by using a measure of the perceived general frequency of behaviors. A preliminary study identifies a set of competence- and morality-related traits and a subset of traits balanced for valence. Studies 1–2 show that moral target persons are associated with greater behavioral flexibility than immoral ones where abstract categories of behaviors are concerned. For example, participants judge it more likely that a fair person would behave unfairly than an unfair person would behave fairly. Study 3 replicates the results of the first 2 studies using concrete categories of behaviors (e.g., telling the truth/omitting some information). Study 4 shows that the positive asymmetry in morality-related trait-behavior relations holds for both North American and European (i.e., Italian) individuals. A small-scale meta-analysis confirms the existence of a positive asymmetry in trait-behavior relations along both morality and competence dimensions for moderate levels of both traits and behaviors. We discuss these findings in relation to prior models and results on trait-behavior relations and we advance a motivational explanation based on self-protection.

The article is here.

Note: This research also applies to perceptions in psychotherapy and in family relationships.

I attend, therefore I am

Carolyn Dicey Jennings
Aeon.com
Originally published July 10, 2017

Here is an excerpt:

Following such considerations, the philosopher Daniel Dennett proposed that the self is simply a ‘centre of narrative gravity’ – just as the centre of gravity in a physical object is not a part of that object, but a useful concept we use to understand the relationship between that object and its environment, the centre of narrative gravity in us is not a part of our bodies, a soul inside of us, but a useful concept we use to make sense of the relationship between our bodies, complete with their own goals and intentions, and our environment. So, you, you, are a construct, albeit a useful one. Or so goes Dennett’s thinking on the self.

And it isn’t just Dennett. The idea that there is a substantive self is passé. When cognitive scientists aim to provide an empirical account of the self, it is simply an account of our sense of self – why it is that we think we have a self. What we don’t find is an account of a self with independent powers, responsible for directing attention and resolving conflicts of will.

There are many reasons for this. One is that many scientists think that the evidence counts in favour of our experience in general being epiphenomenal – something that does not influence our brain, but is influenced by it. In this view, when you experience making a tough decision, for instance, that decision was already made by your brain, and your experience is a mere shadow of that decision. So for the very situations in which we might think the self is most active – in resolving difficult decisions – everything is in fact already achieved by the brain.

The article is here.

Thursday, July 27, 2017

First Human Embryos Edited in U.S.

Steve Connor
MIT Technology Review
Originally published July 26, 2017

The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon, Technology Review has learned.

The effort, led by Shoukhrat Mitalipov of Oregon Health and Science University, involved changing the DNA of a large number of one-cell embryos with the gene-editing technique CRISPR, according to people familiar with the scientific results.

Until now, American scientists have watched with a combination of awe, envy, and some alarm as scientists elsewhere were first to explore the controversial practice. To date, three previous reports of editing human embryos were all published by scientists in China.

Now Mitalipov is believed to have broken new ground both in the number of embryos experimented upon and in demonstrating that it is possible to safely and efficiently correct defective genes that cause inherited diseases.

Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.

The article is here.

Psychiatry Group Tells Members They Can Ignore ‘Goldwater Rule’ and Comment on Trump’s Mental Health

Sharon Begley
Global Research
Originally published July 25, 2017

A leading psychiatry group has told its members they should not feel bound by a longstanding rule against commenting publicly on the mental state of public figures — even the president.

The statement, an email this month from the executive committee of the American Psychoanalytic Association to its 3,500 members, represents the first significant crack in the profession’s decades-old united front aimed at preventing experts from discussing the psychiatric aspects of politicians’ behavior. It will likely make many of its members feel more comfortable speaking openly about President Trump’s mental health.

The impetus for the email was “belief in the value of psychoanalytic knowledge in explaining human behavior,” said psychoanalytic association past president Dr. Prudence Gourguechon, a psychiatrist in Chicago.

“We don’t want to prohibit our members from using their knowledge responsibly.”

That responsibility is especially great today, she told STAT, “since Trump’s behavior is so different from anything we’ve seen before” in a commander in chief.

An increasing number of psychologists and psychiatrists have denounced the restriction as a “gag rule” and flouted it, with some arguing they have a “duty to warn” the public about what they see as Trump’s narcissism, impulsivity, poor attention span, paranoia, and other traits that, they believe, impair his ability to lead.

The article is here.

Wednesday, July 26, 2017

Everybody lies: how Google search reveals our darkest secrets

Seth Stephens-Davidowitz
The Guardian
Originally published July 9, 2017

Everybody lies. People lie about how many drinks they had on the way home. They lie about how often they go to the gym, how much those new shoes cost, whether they read that book. They call in sick when they’re not. They say they’ll be in touch when they won’t. They say it’s not about you when it is. They say they love you when they don’t. They say they’re happy while in the dumps. They say they like women when they really like men. People lie to friends. They lie to bosses. They lie to kids. They lie to parents. They lie to doctors. They lie to husbands. They lie to wives. They lie to themselves. And they damn sure lie to surveys. Here’s my brief survey for you:

Have you ever cheated in an exam?

Have you ever fantasised about killing someone?

Were you tempted to lie?

Many people underreport embarrassing behaviours and thoughts on surveys. They want to look good, even though most surveys are anonymous. This is called social desirability bias. An important paper in 1950 provided powerful evidence of how surveys can fall victim to such bias. Researchers collected data, from official sources, on the residents of Denver: what percentage of them voted, gave to charity, and owned a library card. They then surveyed the residents to see if the percentages would match. The results were, at the time, shocking. What the residents reported to the surveys was very different from the data the researchers had gathered. Even though nobody gave their names, people, in large numbers, exaggerated their voter registration status, voting behaviour, and charitable giving.

The article is here.

Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios

Leon R. Sütfeld, Richard Gast, Peter König and Gordon Pipa
Front. Behav. Neurosci., 05 July 2017

Self-driving cars are posing a new challenge to our ethics. By using algorithms to make decisions in situations where harming humans is possible, probable, or even unavoidable, a self-driving car's ethical behavior comes pre-defined. Ad hoc decisions are made in milliseconds, but can be based on extensive research and debates. The same algorithms are also likely to be used in millions of cars at a time, increasing the impact of any inherent biases, and increasing the importance of getting it right. Previous research has shown that moral judgment and behavior are highly context-dependent, and comprehensive and nuanced models of the underlying cognitive processes are out of reach to date. Models of ethics for self-driving cars should thus aim to match human decisions made in the same context. We employed immersive virtual reality to assess ethical behavior in simulated road traffic scenarios, and used the collected data to train and evaluate a range of decision models. In the study, participants controlled a virtual car and had to choose which of two given obstacles they would sacrifice in order to spare the other. We randomly sampled obstacles from a variety of inanimate objects, animals and humans. Our model comparison shows that simple models based on one-dimensional value-of-life scales are suited to describe human ethical behavior in these situations. Furthermore, we examined the influence of severe time pressure on the decision-making process. We found that it decreases consistency in the decision patterns, thus providing an argument for algorithmic decision-making in road traffic. This study demonstrates the suitability of virtual reality for the assessment of ethical behavior in humans, delivering consistent results across subjects, while closely matching the experimental settings to the real-world scenarios in question.

The article is here.
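Note: The "one-dimensional value-of-life scale" models the authors find best describe behavior can be pictured as a logistic (Bradley-Terry-style) choice rule: each obstacle category gets a single scalar value, and the probability of sparing one obstacle over another depends only on the difference between their values. Below is a minimal Python sketch of that idea; the categories, scores, and temperature parameter are illustrative assumptions, not values fitted in the paper.

```python
import math

# Hypothetical value-of-life scores on a single scale (higher = more worth
# sparing). These numbers are illustrative assumptions, not the paper's fits.
VALUE_OF_LIFE = {
    "trash can": 0.1,
    "deer": 1.0,
    "goat": 1.1,
    "dog": 1.6,
    "adult": 4.0,
    "child": 5.0,
}

def p_spare(a: str, b: str, temperature: float = 1.0) -> float:
    """Logistic choice rule: probability of sparing obstacle `a` (and hence
    sacrificing `b`), driven only by the difference of the two scalar values."""
    diff = VALUE_OF_LIFE[a] - VALUE_OF_LIFE[b]
    return 1.0 / (1.0 + math.exp(-diff / temperature))

if __name__ == "__main__":
    print(f"P(spare child over adult) = {p_spare('child', 'adult'):.2f}")
    print(f"P(spare deer over goat)   = {p_spare('deer', 'goat'):.2f}")
```

One appeal of such a model is that raising the temperature parameter makes choices noisier, which is one way to capture the paper's finding that severe time pressure decreases consistency in decision patterns.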

Tuesday, July 25, 2017

Should a rapist get Viagra or a robber get a cataracts op?

Tom Douglas
Aeon Magazine
Originally published on July 7, 2017

Suppose a physician is about to treat a patient for diminished sex drive when she discovers that the patient – let’s call him Abe – has raped several women in the past. Fearing that boosting his sex drive might lead Abe to commit further sex offences, she declines to offer the treatment. Refusal to provide medical treatment in this case strikes many as reasonable. It might not be entirely unproblematic, since some will argue that he has a human right to medical treatment, but many of us would probably think the physician is within her rights – she’s not obliged to treat Abe. At least, not if her fears about further offending are well-founded.

But now consider a different case. Suppose an eye surgeon is about to book Bert in for a cataract operation when she discovers that he is a serial bank robber. Fearing that treating his developing blindness might help Bert to carry off further heists, she declines to offer the operation. In many ways, this case mirrors that of Abe. But morally, it seems different. In this case, refusing treatment does not seem reasonable, no matter how well-founded the surgeon’s fear. What’s puzzling is why. Why is Bert’s surgeon obliged to treat his blindness, while Abe’s physician has no similar obligation to boost his libido?

Here’s an initial suggestion: diminished libido, it might be said, is not a ‘real disease’. An inconvenience, certainly. A disability, perhaps. But a genuine pathology? No. By contrast, cataract disease clearly is a true pathology. So – the argument might go – Bert has a stronger claim to treatment than Abe. But even if reduced libido is not itself a disease – a view that could be contested – it could have pathological origins. Suppose Abe has a disease that suppresses testosterone production, and thus libido. And suppose that the physician’s treatment would restore his libido by correcting this disease. Still, it would seem reasonable for her to refuse the treatment, if she had good grounds to believe providing it could result in further sex offences.

A new breed of scientist, with brains of silicon

John Bohannon
Science Magazine
Originally published July 5, 2017

Here is an excerpt:

But here’s the key difference: When the robots do finally discover the genetic changes that boost chemical output, they don’t have a clue about the biochemistry behind their effects.

Is it really science, then, if the experiments don’t deepen our understanding of how biology works? To Kimball, that philosophical point may not matter. “We get paid because it works, not because we understand why.”

So far, Hoffman says, Zymergen’s robotic lab has boosted the efficiency of chemical-producing microbes by more than 10%. That increase may not sound like much, but in the $160-billion-per-year sector of the chemical industry that relies on microbial fermentation, a fractional improvement could translate to more money than the entire $7 billion annual budget of the National Science Foundation. And the advantageous genetic changes that the robots find represent real discoveries, ones that human scientists probably wouldn’t have identified. Most of the output-boosting genes are not directly related to synthesizing the desired chemical, for instance, and half have no known function. “I’ve seen this pattern now in several different microbes,” Dean says. Finding the right genetic combinations without machine learning would be like trying to crack a safe with thousands of numbers on its dial. “Our intuitions are easily overwhelmed by the complexity,” he says.

The article is here.

Monday, July 24, 2017

GOP Lawmakers Buy Health Insurance Stocks as Repeal Efforts Move Forward

Lee Fang
The Intercept
Originally posted July 6, 2017

Here is an excerpt:

The issue of insider political trading, with members and staff buying and selling stock using privileged information, has continued to plague Congress. It gained national prominence during the confirmation hearings for Health and Human Services Secretary Tom Price, when it was revealed that the Georgia Republican had bought shares in Innate Immunotherapeutics, a relatively obscure Australian biotechnology firm, while legislating on policies that could have impacted the firm’s performance.

The stock advice had been passed to Price from Rep. Chris Collins, R-N.Y., a board member for Innate Immunotherapeutics, and was shared with a number of other GOP lawmakers, who also invested in the firm. Conaway, records show, bought shares in the company a week after Price.

Conaway, who serves as a GOP deputy whip in the House, has a long record of investing in firms that coincide with his official duties. Politico reported that Conaway’s wife purchased stock in a nuclear firm just after Conaway sponsored a bill to deal with nuclear waste storage in his district. The firm stood to directly benefit from the legislation.

Some of the biggest controversies stem from the revelation that during the 2008 financial crisis, multiple lawmakers from both parties rearranged their financial portfolios to avoid heavy losses. In one case, former Rep. Spencer Bachus, R-Ala., used confidential meetings about the unfolding bank crisis to make special trades designed to increase in value as the stock market plummeted.

The article is here.

Even the Insured Often Can't Afford Their Medical Bills

Helaine Olen
The Atlantic
Originally published June 18, 2017

Here is an excerpt:

The current debate over the future of the Affordable Care Act is obscuring a more pedestrian reality. Just because a person is insured, it doesn’t mean he or she can actually afford their doctor, hospital, pharmaceutical, and other medical bills. The point of insurance is to protect patients’ finances from the costs of everything from hospitalizations to prescription drugs, but out-of-pocket spending for people even with employer-provided health insurance has increased by more than 50 percent since 2010, according to human resources consultant Aon Hewitt. The Kaiser Family Foundation reports that in 2016, half of all insurance policy-holders faced a deductible, the amount people need to pay on their own before their insurance kicks in, of at least $1,000. For people who buy their insurance via one of the Affordable Care Act’s exchanges, that figure will be higher still: Almost 90 percent have deductibles of $1,300 for an individual or $2,600 for a family.

Even a gold-plated insurance plan with a low deductible and generous reimbursements often has its holes. Many people have separate—and often hard-to-understand—in-network and out-of-network deductibles, or lack out-of-network coverage altogether.  Expensive pharmaceuticals are increasingly likely to require a significantly higher co-pay or not be covered at all. While many plans cap out-of-pocket spending, that cap can often be quite high—in 2017, it’s $14,300 for a family plan purchased on the ACA exchanges, for example. Depending on the plan, medical care received from a provider not participating in a particular insurer’s network might not count toward any deductible or cap at all.

The article is here.

Sunday, July 23, 2017

Stop Obsessing Over Race and IQ

John McWhorter
National Review
Originally published July 5, 2017

Here are three excerpts:

Suppose that, at the end of the day, people of African descent have lower IQs on average than do other groups of humans, and that this gap is caused, at least in part, by genetic differences.

(cut)

There is, however, a question that those claiming black people are genetically predisposed to have lower IQs than others fail to answer: What, precisely, would we gain from discussing this particular issue?

(cut)

A second purpose of being “honest” about a racial IQ gap would be the opposite of the first: We might take the gap as a reason for giving not less but more attention to redressing race-based inequities. That is, could we imagine an America in which it was accepted that black people labored — on average, of course — under an intellectual handicap, and an enlightened, compassionate society responded with a Great Society–style commitment to the uplift of the people thus burdened?

I am unaware of any scholar or thinker who has made this argument, perhaps because it, too, is an obvious fantasy. Officially designating black people as a “special needs” race perpetually requiring compensatory assistance on the basis of their intellectual inferiority would run up against the same implacable resistance as condemning them to menial roles for the same reason. The impulse that rejects the very notion of IQ differences between races will thrive despite any beneficent intentions founded on belief in such differences.

The article is here.

Saturday, July 22, 2017

Mapping Cognitive Structure onto the Landscape of Philosophical Debate

An Empirical Framework with Relevance to Problems of Consciousness, Free Will, and Ethics

Jared P. Friedman & Anthony I. Jack
Review of Philosophy and Psychology
pp 1–41

Abstract

There has been considerable debate in the literature as to whether work in experimental philosophy (X-Phi) actually makes any significant contribution to philosophy. One stated view is that many X-Phi projects, notwithstanding their focus on topics relevant to philosophy, contribute little to philosophical thought. Instead, it has been claimed the contribution they make appears to be to cognitive science. In contrast to this view, here we argue that at least one approach to X-Phi makes a contribution which parallels, and also extends, historically salient forms of philosophical analysis, especially contributions from Immanuel Kant, William James, Peter F. Strawson and Thomas Nagel. The framework elaborated here synthesizes philosophical theory with empirical evidence from psychology and neuroscience and applies it to three perennial philosophical problems. According to this account, the origin of these three problems can be illuminated by viewing them as arising from a tension between two distinct types of cognition, each of which is associated with anatomically independent and functionally inhibitory neural networks. If the parallel we draw, between an empirical project and historically highly influential examples of philosophical analysis, is viewed as convincing, it follows that work in the cognitive sciences can contribute directly to philosophy. Further, this conclusion holds whether the empirical details of the account are correct or not.

The article is here.

Friday, July 21, 2017

Judgment Before Emotion: People Access Moral Evaluations Faster than Affective States

Corey Cusimano, Stuti Thapa Magar, & Bertram F. Malle

Abstract

Theories about the role of emotions in moral cognition make different predictions about the relative speed of moral and affective judgments: those that argue that felt emotions are causal inputs to moral judgments predict that recognition of affective states should precede moral judgments; theories that posit emotional states as the output of moral judgment predict the opposite. Across four studies, using a speeded reaction time task, we found that self-reports of felt emotion were delayed relative to reports of event-directed moral judgments (e.g., badness) and were no faster than person-directed moral judgments (e.g., blame). These results pose a challenge to prominent theories arguing that moral judgments are made on the basis of reflecting on affective states.

The article is here.

Enabling torture: APA, clinical psychology training and the failure to disobey.

Alice LoCicero, Robert P. Marlin, David Jull-Patterson, Nancy M. Sweeney, Brandon Lee Gray, & J. Wesley Boyd
Peace and Conflict: Journal of Peace Psychology, Vol 22(4), Nov 2016, 345-355.

Abstract

The American Psychological Association (APA) has historically had close ties with the U.S. Department of Defense (DOD). Recent revelations describe problematic outcomes of those ties, as some in the APA colluded with the DOD to allow psychologists to participate, with expectation of impunity, in harsh interrogations that amounted to torture of Guantanamo detainees, during the Bush era. We now know that leaders in the APA purposely misled psychologists about the establishment of policies on psychologists’ roles in interrogations. Still, the authors wondered why, when the resulting policies reflected a clear contradiction of the fundamental duty to do no harm, few psychologists, in or out of the military, protested the policies articulated in 2005 by the committee on Psychological Ethics and National Security (PENS). Previous research suggested that U.S. graduate students in clinical psychology receive little or no training in the duties of psychologists in military settings or in the ethical guidance offered by international treaties. Thus psychologists might not have been well prepared to critique the PENS policies or to refuse to participate in interrogations. To further explore this issue, the authors surveyed Directors of Clinical Training of doctoral programs in clinical psychology, asking how extensively their programs address dilemmas psychologists may face in military settings. The results indicate that most graduate programs offer little attention to dilemmas of unethical orders, violations of international conventions, or excessively harsh interrogations. These findings, combined with earlier studies, suggest that military psychologists may have been unprepared to address ethical dilemmas, whereas psychologists outside the military may have been unprepared to critique the APA’s collusion with the DOD. The authors suggest ways to address this apparent gap in ethics education for psychology graduate students, interns, and fellows.

The article is here.

Thursday, July 20, 2017

A Proposal for a Scientifically-Informed and Instrumentalist Account of Free Will and Voluntary Action

Eric Racine
Frontiers in Psychology, 17 May 2017

Here is an excerpt:

From the perspective of applied ethics and social behavior, voluntariness is a key dimension in the understanding of autonomous decisions and actions as well as our responsibility toward and ownership of these decisions and actions (Dworkin, 1988; Wegner, 2002). Autonomous decisions and actions imply that the agent is initiating them according to his or her own wishes and that the person is free to do so (i.e., not under direct or indirect forms of coercion that would imperil the existence of such an ability). Accordingly, in applied ethics, voluntariness commonly refers to “the degree that [the moral agent] wills the action without being under the control of another's influence” (Beauchamp and Childress, 2001). Indeed, if moral agents have a jeopardized ability, or even lack the ability to initiate actions freely, then neither can they be faulted for their own actions (responsibility) nor encouraged to undertake actions on the premise of their expression of their own preferences (autonomy; Felsen and Reiner, 2011; Castelo et al., 2012). The concept of FW commonly captures a basic form of agency and a responsibility associated with this ability to self-control and initiate voluntary action (Roskies, 2006; Brass et al., 2013). Accordingly, in this paper, FW designates primarily a basic ability to envision options and choose between them such that the will or volition of the person is considered to be free.

The article is here.

Editor's note: The concept of free will is a main concern in psychotherapy.  How autonomous is your patient's behavior?

Wednesday, July 19, 2017

Phenomenological Approaches to Ethics and Information Technology

Lucas Introna
Stanford Encyclopedia of Philosophy

Here is an excerpt:

3.1 The Impact of Information Technology and the Application of Ethical Theory

Much of the ethical debate about computers and information technology more generally has been informed by the tool and impact view of information technology (discussed in section 1.1 above). Within this tradition a number of issues have emerged as important. For example, whether computers (or information and communication technology more generally) generate new types of ethical problems that require new or different ethical theories or whether it is just more of the same (Gorniak 1996). These debates are often expressed in the language of the impact of information technology on particular values and rights (Johnson 1985, 1994). Thus, within this approach we have discussions about the impact of CCTV or web cookies on the right to privacy, the impact of the digital divide on the right to access information, the impact of the piracy of software on property rights, and so forth. In these debates Jim Moor (1985) has argued that computers show up policy vacuums that require new thinking and the establishment of new policies. Others have argued that the resources provided by classical ethical theory such as utilitarianism, consequentialism and deontological ethics are more than enough to deal with all the ethical issues emerging from our design and use of information technology (Gert 1999).

The entry is here.

Editor's Note: Yes, I use the cut and paste function frequently, and in this entry as well.

Tuesday, July 18, 2017

Responding to whistleblower’s claims, Duke admits research data falsification

Ray Gronberg
The Herald-Sun
Originally published July 2, 2017

In-house investigators at Duke University believe a former lab tech falsified or fabricated data that went into 29 medical research reports, lawyers for the university say in their answer to a federal whistleblower lawsuit against it.

Duke’s admissions concern the work of Erin Potts-Kant, and a probe it began in 2013 when she was implicated in an otherwise-unrelated embezzlement. The lawsuit, from former lab analyst Joseph Thomas, contends Duke and some of its professors used the phony data to fraudulently obtain federal research grants. He also alleges they ignored warning signs about Potts-Kant’s work, and tried to cover up the fraud.

The university’s lawyers have tried to get the case dismissed, but in April, a federal judge said it can go ahead. The latest filings thus represent Duke’s first answer to the substance of Thomas’ allegations.

Up front, it said Potts-Kant told a Duke investigating committee that she’d faked data that wound up being “included in various publications and grant applications.”

The article is here.

Human decisions in moral dilemmas are largely described by Utilitarianism

Anja Faulhaber, Anke Dittmer, Felix Blind, and others

Abstract

Ethical thought experiments such as the trolley dilemma have been investigated extensively in the past, showing that humans act in a utilitarian way, trying to cause as little overall damage as possible. These trolley dilemmas have gained renewed attention over the past years, especially due to the necessity of implementing moral decisions in autonomous driving vehicles (ADVs). We conducted a set of experiments in which participants experienced modified trolley dilemmas as the driver in a virtual reality environment. Participants had to make decisions between two discrete options: driving on one of two lanes where different obstacles came into view. Obstacles included a variety of human-like avatars of different ages and group sizes. Furthermore, we tested the influence of a sidewalk as a potential safe harbor and a condition implicating a self-sacrifice. Results showed that subjects, in general, decided in a utilitarian manner, sparing the highest number of avatars possible with a limited influence of the other variables. Our findings support that people’s behavior is in line with the utilitarian approach to moral decision making. This may serve as a guideline for the implementation of moral decisions in ADVs.

The article is here.
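Note: In its barest form, the utilitarian pattern the authors report reduces to a rule that picks whichever option sacrifices fewer avatars. A minimal sketch of that decision rule follows; the option labels and counts are hypothetical, and the study's full analysis also weighed factors such as age, the sidewalk condition, and self-sacrifice.

```python
from dataclasses import dataclass

@dataclass
class LaneOption:
    label: str
    avatars_harmed: int  # how many human-like avatars this choice sacrifices

def utilitarian_choice(options: list[LaneOption]) -> LaneOption:
    # The rule the data largely supported: minimize the number of avatars
    # harmed, with other attributes exerting only limited influence.
    return min(options, key=lambda option: option.avatars_harmed)

if __name__ == "__main__":
    choice = utilitarian_choice([
        LaneOption("left lane", avatars_harmed=3),
        LaneOption("right lane", avatars_harmed=1),
    ])
    print(choice.label)  # -> right lane
```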

Monday, July 17, 2017

Childhood Firearm Injuries in the United States

Katherine A. Fowler, Linda L. Dahlberg, Tadesse Haileyesus, Carmen Gutierrez, Sarah Bacon
Pediatrics
July 2017, VOLUME 140 / ISSUE 1

RESULTS: Nearly 1300 children die and 5790 are treated for gunshot wounds each year. Boys, older children, and minorities are disproportionately affected. Although unintentional firearm deaths among children declined from 2002 to 2014 and firearm homicides declined from 2007 to 2014, firearm suicides decreased between 2002 and 2007 and then showed a significant upward trend from 2007 to 2014. Rates of firearm homicide among children are higher in many Southern states and parts of the Midwest relative to other parts of the country. Firearm suicides are more dispersed across the United States with some of the highest rates occurring in Western states. Firearm homicides of younger children often occurred in multivictim events and involved intimate partner or family conflict; older children more often died in the context of crime and violence. Firearm suicides were often precipitated by situational and relationship problems. The shooter playing with a gun was the most common circumstance surrounding unintentional firearm deaths of both younger and older children.


CONCLUSIONS: Firearm injuries are an important public health problem, contributing substantially to premature death and disability of children. Understanding their nature and impact is a first step toward prevention.

The article is here.

The ethics of brain implants and ‘brainjacking’

Chelsey Ballarte
Geek Wire
Originally published June 29, 2017

Here is an excerpt:

Fetz and the report’s other authors say we should regard advancements in machine learning and artificial intelligence with the same measure of caution we use when we consider accountability for self-driving cars and privacy for smartphones.

Fetz recalled the time security researchers proved they could hack into a Jeep Cherokee over the internet and disable it as it drove on the freeway. He said that in the world of prosthetics, a hacker could conceivably take over someone’s arm.

“The hack could override the signals,” he said. It could even override a veto, and that’s the danger. The strategy to head off that scenario would have to be to make sure the system can’t be influenced from the outside.

Study co-author John Donoghue, a director of the Wyss Center for Bio and Neuroengineering in Geneva, said these are just a few things we would have to think about if these mechanisms became the norm.

“We must carefully consider the consequences of living alongside semi-intelligent, brain-controlled machines, and we should be ready with mechanisms to ensure their safe and ethical use,” he said in a news release.

Donoghue said that as technology advances, we need to be ready to think about how our current laws would apply. “Our aim is to ensure that appropriate legislation keeps pace with this rapidly progressing field,” he said.

The article is here.

Sunday, July 16, 2017

Masked Marketing: Pharmaceutical Company Funding of ADHD Patient Advocacy Groups

Marnie Klein
Hastings Center
Originally posted June 29, 2017

In 1971, the United Nations passed a resolution prohibiting its member nations from advertising psychotropic drugs to the general public. More than 40 years later, this resolution has done little to stop pharmaceutical companies from marketing stimulants as treatments for attention deficit-hyperactivity disorder. The means by which, and the ethical dilemmas involved when, pharmaceutical companies market their products were discussed earlier this month at the annual PharmedOut conference, which investigated how industry influences medical discourse.

Alan Schwarz, the author of ADHD Nation, exposed how drug companies have, often covertly, sponsored educational resources and patient advocacy groups. These groups face a difficult conflict of interest: by accepting drug company funding, they can increase their reach to those looking for resources; however, their neutrality is compromised, particularly when they fail to disclose the funding source. The New England Journal of Medicine reports that pharmaceutical industry-sponsored advocacy groups may be likely to support drugs, as well as policy proposals, that cater to their sponsors’ financial interests.

One such pharmaceutical company is Shire. One of the British company’s highest-grossing products is Adderall, a stimulant used in treating ADHD that has earned the company billions in sales to date. Shire sponsors ADHD patient-advocacy groups, like Children and Adults with ADHD (CHADD).

The article is here.

Saturday, July 15, 2017

How do self-interest and other-need interact in the brain to determine altruistic behavior?

Jie Hu, Yue Li, Yunlu Yin, Philip R. Blue, Hongbo Yu, Xiaolin Zhou
NeuroImage
Volume 157, 15 August 2017, Pages 598–611

Abstract

Altruistic behavior, i.e., promoting the welfare of others at a cost to oneself, is subserved by the integration of various social, affective, and economic factors represented in extensive brain regions. However, it is unclear how different regions interact to process/integrate information regarding the helper's interest and recipient's need when deciding whether to behave altruistically. Here we combined an interactive game with functional Magnetic Resonance Imaging (fMRI) and transcranial direct current stimulation (tDCS) to characterize the neural network underlying the processing/integration of self-interest and other-need. At the behavioral level, high self-risk decreased helping behavior and high other-need increased helping behavior. At the neural level, activity in medial prefrontal cortex (MPFC) and right dorsolateral prefrontal cortex (rDLPFC) were positively associated with self-risk levels, and activity in right inferior parietal lobe (rIPL) and rDLPFC were negatively associated with other-need levels. Dynamic causal modeling further suggested that both MPFC and rIPL were extrinsically connected to rDLPFC; high self-risk enhanced the effective connectivity from MPFC to rDLPFC, and the modulatory effect of other-need on the connectivity from rIPL to rDLPFC positively correlated with the modulatory effect of other-need on individuals’ helping rate. Two tDCS experiments provided causal evidence that rDLPFC affects both self-interest and other-need concerns, and rIPL selectively affects the other-need concerns. These findings suggest a crucial role of the MPFC-IPL-DLPFC network during altruistic decision-making, with rDLPFC as a central node for integrating and modulating motives regarding self-interest and other-need.

The article is here.

Friday, July 14, 2017

The Moral Value of Compassion

Alfred Archer
Forthcoming in Justin Caouette and Carolyn Price (Eds.) The Moral Psychology of Compassion

Introduction

Many people think that compassion has an important role to play in our moral lives. We might even think, as Arthur Schopenhauer (2010 [1840]) did, that compassion is the basis of morality. More modestly, we might think that compassion is one important source of moral motivation and would play an important role in the life of a virtuous person. Recently, however, philosophers such as Roger Crisp (2008) and Jesse Prinz (2011) and psychologists such as Paul Bloom (2016) have called into question the value of sharing in another’s suffering. All three argue that this should not play a significant role in the life of the morally virtuous person. In its place, Crisp endorses rational benevolence as the central form of moral motivation for virtuous people.

The issue of whether compassion is a superior form of motivation to rational benevolence is important for at least two reasons. First, it is important for both ethics and political theory. Care ethicists, for example, seek to defend moral and political outlooks based on compassion. Carol Gilligan, for instance, claims that care ethics is “tied to feelings of empathy and compassion” (1982, 69). Similarly, Elizabeth Porter (2006) argues in favour of basing politics on compassion. These appeals are only plausible if we accept that compassion is a valuable part of morality. Second, the issue of whether or not compassion plays a valuable role in morality is also important for moral education. Whether or not we see compassion as having a valuable role here is likely to be largely settled by the issue of whether compassion plays a useful role in our moral lives.

I will argue that despite the problems facing compassion, it has a distinctive role to play in moral life that cannot be fully captured by rational benevolence. My discussion will proceed as follows. In §1, I examine the nature of compassion and explain how I will be using the term in this paper. I will then, in §2, explain the traditional account of the value of compassion as a source of moral motivation. In §3, I will investigate a number of challenges to the value of compassionate moral motivation. I will then, in §4, explain why, despite facing important problems, compassion has a distinctive role to play in moral life.

The penultimate version is here.

Social Mission in Health Professions Education: Beyond Flexner

Fitzhugh Mullan
JAMA: Viewpoint
Originally published June 26, 2017

Here is an excerpt:

Today, with a broader recognition of the importance of social determinants of health and a better understanding of the substantial health disparities within the United States, new ideas are circulating and important experiments in curricular redesign are taking place at many schools. Accountable care organizations, primary care medical homes, interprofessional education, cost consciousness, and teaching health centers are all present to some degree in the curricula of health professions schools and teaching hospitals, and all have dimensions of social mission. These developments are encouraging, but the creative focus on social mission that they represent needs to be widely embraced, becoming a core value of all health professions educational institutions, including schools, teaching hospitals, and postgraduate training programs.

Toward that end, the unqualified commitment of these institutions to teaching and modeling social mission is needed, as are the voices of academic professional organizations, accrediting bodies, and student groups who have important roles in defining the values of young professionals. The task is interprofessional and should involve other disciplines including nursing, dentistry, public health, physician assistants, and, perhaps, law and social work. The commitments needed are not the domain of any one profession, and collaborative initiatives at the educational level will reinforce social mission norms in practice. The precision with which health disparities and the morbidity and mortality that they represent can be documented calls on all health professions schools, academic health centers, and teaching hospitals to place their commitment to social mission alongside their dedication to education, research, and service in pursuit of a healthier and fairer society.

The article is here.

Thursday, July 13, 2017

Professors lead call for ethical framework for new 'mind control' technologies

Medical Xpress
Originally published July 6, 2017

Here is an excerpt:

As advances in molecular biology and chemical engineering are increasing the precision of pharmaceuticals, even more spatially-targeted technologies are emerging. New noninvasive treatments send electrical currents or magnetic waves through the scalp, altering the ability of neurons in a targeted region to fire. Surgical interventions are even more precise; they include implanted electrodes that are designed to quell seizures before they spread, or stimulate the recall of memories after a traumatic brain injury.

Research into the brain's "wiring"—how neurons are physically connected in networks that span disparate parts of the brain—and how this wiring relates to changing mental states has enabled principles from control theory to be applied to neuroscience. For example, a recent study by Bassett and colleagues shows how changes in brain wiring from childhood through adolescence leads to greater executive function, or the ability to consciously control one's thoughts and attention.

While insights from network science and control theory may support new treatments for conditions like obsessive compulsive disorder and traumatic brain injury, the researchers argue that clinicians and bioethicists must be involved in the earliest stages of their development. As the positive effects of treatments become more profound, so do their potential side effects.

"New methods of controlling mental states will provide greater precision in treatments," Sinnott-Armstrong said, "and we thus need to think hard about the ensuing ethical issues regarding autonomy, privacy, equality and enhancement."

The article is here.

The Only Way Is Ethics: Why Good People Do Bad Things and How To Stop Us

www.ethicalsystems.org
MindGym

Foreword

In social psychology we have this thing called the ‘fundamental attribution error.’ It refers to the fact that when people see somebody do something unusual, their first reaction is to assume that the act expressed the person’s internal values or personality (“he’s such a crook!”), and to underestimate the power of external factors and pressures. So, when we hear about a company brought down by an ethics scandal, we immediately search for the culprits, the bad actors, the bad apples. We can almost always find them, fire them, maybe indict them, and move on… to the next scandal.

Sometimes a scandal is caused by one psychopath or sleazebag in the C-suite. But usually not. If you really want to understand the causes of cheating, risky and unethical behavior within complex organizations, you have to get past this attributional error and examine the barrel, not just the apples in the barrel. You have to learn some social psychology, which is like putting on a pair of magic glasses that let you see social forces and cognitive biases in action.

Once you see how profoundly we are all shaped by local organizational culture, and how clueless we often are about the real causes behind our actions, you can begin to work with human psychology, adapt your processes to it, and obtain far better results.

Mind Gym shines a spotlight on this challenge in this whitepaper. A great deal of their evidence shows that having ethics pays, yet most organizations focus on compliance rather than on ethics. Mind Gym offers you a set of tools and a framework to begin diagnosing your own organization. And they offer concrete advice for improvement. It is crucial that your organization is aligned on ethics at all levels – you may not see results from just changing one or two processes. If you want to run a great organization that employees are proud to work for, and that customers buy from with high trust, then you should consider making an all-out commitment to ethics. You should consider doing ethical systems design.

The White Paper can be downloaded here.

Wednesday, July 12, 2017

Emotion shapes the diffusion of moralized content in social networks

William J. Brady, Julian A. Wills, John T. Jost, Joshua A. Tucker, and Jay J. Van Bavel
PNAS 2017 ; published ahead of print June 26, 2017

Abstract

Political debate concerning moralized issues is increasingly common in online social networks. However, moral psychology has yet to incorporate the study of social networks to investigate processes by which some moral ideas spread more rapidly or broadly than others. Here, we show that the expression of moral emotion is key for the spread of moral and political ideas in online social networks, a process we call “moral contagion.” Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word. Furthermore, we found that moral contagion was bounded by group membership; moral-emotional language increased diffusion more strongly within liberal and conservative networks, and less between them. Our results highlight the importance of emotion in the social transmission of moral ideas and also demonstrate the utility of social network methods for studying morality. These findings offer insights into how people are exposed to moral and political ideas through social networks, thus expanding models of social influence and group polarization as people become increasingly immersed in social media networks.
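
For readers curious how such a measure might work in practice, here is a minimal Python sketch (an editorial illustration, not the authors' code: the word list and example messages are invented, and only the roughly 20%-per-word diffusion increase is taken from the abstract):

    # Sketch of the "moral contagion" measure: count distinctly
    # moral-emotional words in a message and scale expected diffusion
    # by ~1.20 per word, per the abstract's reported effect size.
    # This tiny word list is a hypothetical stand-in for the study's
    # dictionary.
    MORAL_EMOTIONAL_WORDS = {"attack", "destroy", "evil", "fight",
                             "hate", "blame", "shame", "greed"}

    def moral_emotional_count(message: str) -> int:
        tokens = (t.strip(".,!?\"'") for t in message.lower().split())
        return sum(t in MORAL_EMOTIONAL_WORDS for t in tokens)

    def expected_relative_diffusion(message: str,
                                    per_word_factor: float = 1.20) -> float:
        # Diffusion relative to a message with no moral-emotional words.
        return per_word_factor ** moral_emotional_count(message)

    for tweet in ["New policy announced today.",
                  "This evil policy will destroy us. Fight the greed!"]:
        print(f"{expected_relative_diffusion(tweet):.2f}x | {tweet}")

The second message contains four moral-emotional words and would therefore be expected to diffuse about 1.2 to the fourth power, or roughly twice, as widely as the first.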

The research is here.

Suicide and self-harm in prisons hit worst ever levels

Rajeev Syal
The Guardian
Originally posted June 28, 2017

Prisons have “struggled to cope” with record rates of suicide and self-harm among inmates following cuts to funding and staff numbers, the public spending watchdog has said. The National Audit Office said it remains unclear how the authorities will meet aims for improving prisoners’ mental health or get value for money because of a lack of relevant data.

Auditors said that self-harm incidents increased by 73% between 2012 and 2016 to 40,161, while the 120 self-inflicted deaths in prison in 2016 was the highest figure on record and almost double that for 2012. Since 2010, when David Cameron became prime minister, funding of offender management has been reduced by 13%, while staff numbers have been cut by 30%, the report said.

The article is here.

Tuesday, July 11, 2017

Moral Judgments and Social Stereotypes: Do the Age and Gender of the Perpetrator and the Victim Matter?

Qiao Chu, Daniel Grühn
Social Psychological and Personality Science
First Published June 19, 2017

Abstract
We investigated how moral judgments were influenced by (a) the age and gender of the moral perpetrator and victim, (b) the moral judge’s benevolent ageism and benevolent sexism, and (c) the moral judge’s gender. By systematically manipulating the age and gender of the perpetrators and victims in moral scenarios, participants in two studies made judgments about the moral transgressions. We found that (a) people made more negative judgments when the victims were old or female rather than young or male, (b) benevolent ageism influenced people’s judgments about young versus old perpetrators, and (c) people had differential moral expectations of perpetrators who belonged to their same-gender group versus opposite-gender group. The findings suggest that age and gender stereotypes are so salient that they bias people’s moral judgments even when the transgression is undoubtedly intentional and hostile.

The article is here.

Men Can Be So Hormonal

Therese Huston
The New York Times
Originally posted June 24, 2017

Here is an excerpt:

People don’t like to believe that they’re average. But compared with women, men tend to think they’re much better than average.

If you feel your judgment is right, are you interested in how others see the problem? Probably not. Nicholas D. Wright, a neuroscientist at the University of Birmingham in Britain, studies how fluctuations in testosterone shape one’s willingness to collaborate.  Most testosterone researchers study men, for obvious reasons, but Dr. Wright and his team focus on women. They asked women to perform a challenging perceptual task: detecting where a fuzzy pattern had appeared on a busy computer screen. When women took oral testosterone, they were more likely to ignore the input of others, compared with women in the placebo condition. Amped up on testosterone, they relied more heavily on their own judgment, even when they were wrong.

The findings of the latest study, which have been presented at conferences and will be published in Psychological Science in January, offer more reasons to worry about testosterone supplements.

The article is here.

Monday, July 10, 2017

When Are Doctors Too Old to Practice?

By Lucette Lagnado
The Wall Street Journal
Originally posted June 24, 2017

Here is an excerpt:

Testing older physicians for mental and physical ability is growing more common. Nearly a fourth of physicians in America are 65 or older, and 40% of these are actively involved in patient care, according to the American Medical Association. Experts at the AMA have suggested that they be screened lest they pose a risk to patients. An AMA working group is considering guidelines.

Concern over older physicians' mental states--and whether it is safe for them to care for patients--has prompted a number of institutions, from Stanford Health Care in Palo Alto, Calif., to Driscoll Children's Hospital in Corpus Christi, Texas, to the University of Virginia Health System, to adopt age-related physician policies in recent years. The goal is to spot problems, in particular signs of cognitive decline or dementia.

Now, as more institutions like Cooper embrace the measures, they are roiling some older doctors and raising questions of fairness, scientific validity--and ageism.

"It is not for the faint of heart, this policy," said Ann Weinacker, 66, the former chief of staff at the hospital and professor of medicine at Stanford University who has overseen the controversial efforts to implement age-related screening at Stanford hospital.

A group of doctors has been battling Stanford's age-based physician policies for the past five years, contending they are demeaning and discriminatory. The older doctors got the medical staff to scrap a mental-competency exam aimed at testing for cognitive impairment. Most, like Frank Stockdale, an 81-year-old breast-cancer specialist, refused to take it.

The article is here.

Big Pharma gives your doctor gifts. Then your doctor gives you Big Pharma’s drugs

Nicole Van Groningen
The Washington Post
Originally posted June 13, 2017

Here is an excerpt:

The losers in this pharmaceutical industry-physician interaction are, of course, patients. The high costs of branded drugs are revenue to drug companies, but out-of-pocket expenses to health-care consumers. Almost a quarter of Americans who take prescription drugs report that they have difficulty affording their medications, and the high cost of these drugs is a leading reason that patients can’t adhere to them. Most branded drugs offer minimal — if any — benefit over generic formulations. And if doctors prescribe brand-name drugs that are prohibitively more expensive than generic options, patients might forgo the medications altogether — causing greater harm.

On a national scale, the financial burden imposed by branded drugs is enormous. Current estimates place our prescription drug spending at more than $400 billion annually, and branded drugs are almost entirely to blame: Though they constitute only 10 percent of prescriptions, they account for 72 percent of total drug spending. Even modest reductions in our use of branded prescription drugs — on par with the roughly 8 percent relative reduction seen in the JAMA study — could translate to billions of dollars in national health-care savings.

The article is here.

Sunday, July 9, 2017

Letter from the American Medical Association to McConnell and Schumer

James L. Madara
Letter from the American Medical Association
Sent June 26, 2017

To: Senators McConnell and Schumer

On behalf of the physician and medical student members of the American Medical Association
(AMA), I am writing to express our opposition to the discussion draft of the “Better Care
Reconciliation Act” released on June 22, 2017. Medicine has long operated under the precept of
Primum non nocere, or “first, do no harm.” The draft legislation violates that standard on many
levels.

In our January 3, 2017 letter to you, and in subsequent communications, we have consistently
urged that the Senate, in developing proposals to replace portions of the current law, pay special
attention to ensure that individuals currently covered do not lose access to affordable, quality
health insurance coverage. In addition, we have advocated for the sufficient funding of Medicaid
and other safety net programs and urged steps to promote stability in the individual market.
Though we await additional analysis of the proposal, it seems highly likely that a combination of
smaller subsidies resulting from lower benchmarks and the increased likelihood of waivers of
important protections such as required benefits, actuarial value standards, and out of pocket
spending limits will expose low and middle income patients to higher costs and greater difficulty
in affording care.

The AMA is particularly concerned with proposals to convert the Medicaid program into a
system that limits the federal obligation to care for needy patients to a predetermined formula
based on per-capita caps.

The entire letter is here.

Saturday, July 8, 2017

The Ethics of CRISPR

Noah Robischon
Fast Company
Originally published on June 20, 2017

On the eve of publishing her new book, Jennifer Doudna, a pioneer in the field of CRISPR-Cas9 biology and genome engineering, spoke with Fast Company about the potential for this new technology to be used for good or evil.

“The worst thing that could happen would be for [CRISPR] technology to be speeding ahead in laboratories,” Doudna tells Fast Company. “Meanwhile, people are unaware of the impact that’s coming down the road.” That’s why Doudna and her colleagues have been raising awareness of the following issues.

DESIGNER HUMANS

Editing sperm cells or eggs—known as germline manipulation—would introduce inheritable genetic changes at inception. This could be used to eliminate genetic diseases, but it could also be a way to ensure that your offspring have blue eyes, say, and a high IQ. As a result, several scientific organizations and the National Institutes of Health have called for a moratorium on such experimentation. But, writes Doudna, “it’s almost certain that germline editing will eventually be safe enough to use in the clinic.”

The article is here.

Israeli education minister's ethics code would bar professors from expressing political opinions

Yarden Skop
Haaretz
Originally posted June 10, 2017

An ethics code devised at Education Minister Naftali Bennett's behest would bar professors from expressing political opinions, it emerged Friday.

The code, put together by Asa Kasher, an ethics and philosophy professor at Tel Aviv University, would also forbid staff from calling for an academic boycott of Israel.

Bennett had asked Kasher a few months ago to write a set of rules for appropriate political conduct at academic institutions. Kasher had written the Israel Defense Forces' ethics code.

The contents of the document, which were first reported by the Yedioth Ahronoth newspaper on Friday, will soon be submitted for the approval of the Council for Higher Education.

The article is here.

Friday, July 7, 2017

Federal ethics chief resigns after clashes with Trump

Lauren Rosenblatt
The Los Angeles Times
Originally posted July 6, 2017

Walter Shaub Jr., director of the U.S. Office of Government Ethics, announced Thursday he would resign, following a rocky relationship with President Trump and repeated confrontations with the administration.

Shaub, appointed by President Obama in 2013, had unsuccessfully pressed Trump to divest his business interests to avoid potential conflicts of interest, something Trump refused to do.

The ethics watchdog also engaged in a public battle with the White House over his demands for more information about former lobbyists and other appointees who had been granted waivers from ethics rules. After initially balking, the White House eventually released the requested information about the waivers.

Shaub called for a harsher punishment for presidential advisor Kellyanne Conway after she flouted ethics rules by publicly endorsing Ivanka Trump’s clothing line during a television appearance.

The article is here.

Is The Concern Artificial Intelligence — Or Autonomy?

Alva Noe
npr.org
Originally posted June 16, 2017

Here is an excerpt:

The big problem AI faces is not the intelligence part, really. It's the autonomy part. Finally, at the end of the day, even the smartest computers are tools, our tools — and their intentions are our intentions. Or, to the extent that we can speak of their intentions at all — for example of the intention of a self-driving car to avoid an obstacle — we have in mind something it was designed to do.

Even the most primitive organism, in contrast, at least seems to have a kind of autonomy. It really has its own interests. Light. Food. Survival. Life.

The danger of our growing dependence on technologies is not really that we are losing our natural autonomy in quite this sense. Our needs are still our needs. But it is a loss of autonomy, nonetheless. Even auto mechanics these days rely on diagnostic computers and, in the era of self-driving cars, will any of us still know how to drive? Think what would happen if we lost electricity, or if the grid were really and truly hacked? We'd be thrown back into the 19th century, as Dennett says. But in many ways, things would be worse. We'd be thrown back — but without the knowledge and know-how that made it possible for our ancestors to thrive in the olden days.

I don't think this fear is unrealistic. But we need to put it in context.

The article is here.

Thursday, July 6, 2017

The Torturers Speak

The Editorial Board
The New York Times
Originally posted June 23, 2017

It’s hard to watch the videotaped depositions of the two former military psychologists who, working as independent contractors, designed, oversaw and helped carry out the “enhanced interrogation” of detainees held at C.I.A. black sites in the months after the Sept. 11 terror attacks.

The men, Bruce Jessen and James Mitchell, strike a professional pose. Dressed in suits and ties, speaking matter-of-factly, they describe the barbaric acts they and others inflicted on the captives, who were swept up indiscriminately and then waterboarded, slammed into walls, locked in coffins and more — all in the hunt for intelligence that few, if any, of them possessed.

One died of apparent hypothermia.

Many others were ultimately released without charge.

When pushed to confront the horror and uselessness of what they had done, the psychologists fell back on one of the oldest justifications of wartime. “We were soldiers doing what we were instructed to do,” Dr. Jessen said.

Perhaps, but they were also soldiers whose contracting business was paid more than $81 million.

The information is here.

What the Rise of Sentient Robots Will Mean for Human Beings

George Musser
NBC
Originally posted June 19, 2017

Here is an excerpt:

“People expect that self-awareness is going to be this end game of artificial intelligence when really there are no scientific pursuits where you start at the end,” says Justin Hart, a computer scientist at the University of Texas. He and other researchers are already building machines with rudimentary minds. One robot wriggles like a newborn baby to understand its body. Another robot babbles about what it sees and cries when you hit it. Another sets off to explore its world on its own.

No one claims that robots have a rich inner experience — that they have pride in floors they've vacuumed or delight in the taste of 120-volt current. But robots can now exhibit some qualities similar to those of the human mind, including empathy, adaptability, and gumption.

Beyond it just being cool to create robots, researchers design these cybernetic creatures because they’re trying to fix flaws in machine-learning systems. Though these systems may be powerful, they are opaque. They work by relating input to output, like a test where you match items in column ‘A’ with items in column ‘B’. The AI systems basically memorize these associations. There’s no deeper logic behind the answers they give. And that’s a problem.
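
A toy sketch of that opacity (an editorial illustration, not from the article): a "learner" that memorizes input-output pairs can answer new queries by similarity, but it has nothing to offer as a rationale beyond the stored associations themselves.

    # A 1-nearest-neighbor "memorizer": training is storage, and
    # prediction is lookup of the closest stored input. It produces
    # answers with no deeper logic behind them.
    def distance(a: str, b: str) -> int:
        # Count of differing characters, plus a penalty for length gaps.
        return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

    class MemorizingClassifier:
        def __init__(self):
            self.pairs = []               # memorized (input, label) rows

        def train(self, examples):
            self.pairs.extend(examples)   # "learning" is just storage

        def predict(self, query: str) -> str:
            nearest = min(self.pairs, key=lambda p: distance(p[0], query))
            return nearest[1]

    model = MemorizingClassifier()
    model.train([("0110", "cat"), ("1001", "dog"), ("0111", "cat")])
    print(model.predict("0100"))   # prints "cat"; it cannot say why

Ask this model to justify its answer and the best it can do is point to the memorized pair it matched, which is exactly the interpretability problem the researchers describe.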

Humans can also be hard to read. We spend an inordinate amount of time analyzing ourselves and others, and arguably, that’s the main role of our conscious minds. If machines had minds, they might not be so inscrutable. We could simply ask them why they did what they did.

The article is here.

Wednesday, July 5, 2017

Chief executives who lack ethics should be more afraid of public opinion than ever

Emma Koehn
Smart Company
Originally posted June 16, 2017

The age of the internet has made it near impossible for companies to hide when someone in their organisation makes a major blunder, and the research indicates the world is now tougher on bosses who stuff up than ever before.

PricewaterhouseCoopers partners Kristin Rivera and Per-Ola Karlsson suggest in Harvard Business Review this week that the numbers don’t lie: more chief executives are being fired for “ethical blunders” than ever before, with scrutiny from both customers and shareholders accelerating.

The pair examine the numbers from PwC’s most recent global chief executive success study, which suggests the number of company heads who were dismissed for ethical lapses increased from 3.9% in the four years preceding 2012 to 5.3% at the end of 2016.

“Firstly, the public has become more suspicious, more critical and less forgiving of corporate misbehaviour,” Rivera and Karlsson say.

“Second, governance and regulation in many countries has become both more proactive and more punitive.”

The article is here.

DOJ corporate compliance watchdog resigns citing Trump's conduct

Olivia Beavers
The Hill
Originally published July 2, 2017

A top Justice Department official who served as a corporate compliance watchdog has left her job, saying she felt she could no longer force companies to comply with the government's ethics laws when members of the administration she worked for have conducted themselves in a manner that she claims would not be tolerated.

Hui Chen had served in the department’s compliance counsel office from November 2015 until she resigned in June, breaking her silence in a LinkedIn post last week highlighted by The International Business Times, which points to the Trump administration’s behavior as the reason for her job change.

“To sit across the table from companies and question how committed they were to ethics and compliance felt not only hypocritical, but very much like shuffling the deck chair on the Titanic," Chen wrote.

The article is here.

Tuesday, July 4, 2017

Psychologists Open a Window on Brutal C.I.A. Interrogations: A Lawsuit Filed on Behalf of Former Prisoners Reveals New Details

Sheri Fink & James Risen
The New York Times
Originally posted June 21, 2017

Fifteen years after he helped devise the brutal interrogation techniques used on terrorism suspects in secret C.I.A. prisons, John Bruce Jessen, a former military psychologist, expressed ambivalence about the program.

He described himself and a fellow military psychologist, James Mitchell, as reluctant participants in using the techniques, some of which are widely viewed as torture, but also justified the practices as effective in getting resistant detainees to cooperate.

“I think any normal, conscionable man would have to consider carefully doing something like this,” Dr. Jessen said in a newly disclosed deposition. “I deliberated with great, soulful torment about this, and obviously I concluded that it could be done safely or I wouldn’t have done it.”

The two psychologists — whom C.I.A. officials have called architects of the interrogation program, a designation they dispute — are defendants in the only lawsuit that may hold participants accountable for causing harm.

The program has been well documented, but under deposition, with a camera focused on their faces, Drs. Jessen and Mitchell provided new details about the interrogation effort, their roles in it and their rationales. Their accounts were sometimes at odds with their own correspondence at the time, as well as previous portrayals of them by officials and other interrogators as eager participants in the program.

The article is here.

Nuremberg Betrayed: Human Experimentation and the CIA Torture Program

Sarah Dougherty and Scott A. Allen
Physicians for Human Rights
June 2017

Based on an analysis of thousands of pages of documents and years of research, Physicians for Human Rights shows that the CIA’s post-9/11 torture program constituted an illegal, unethical regime of experimental research on unwilling human subjects, testing the flawed hypothesis that torture could aid interrogators in breaking the resistance of detainees. In “Nuremberg Betrayed: Human Experimentation and the CIA Torture Program,” PHR researchers show that CIA contract psychologists James Mitchell and Bruce Jessen created a research program in which health professionals designed and applied torture techniques and collected data on torture’s effects. This constitutes one of the gravest breaches of medical ethics by U.S. health personnel since the Nuremberg Code was developed in the wake of Nazi medical atrocities committed during World War Two.

Delving into the role health professionals played in designing and implementing torture, the report uses newly released documents to show how the results of untested, brutal torture techniques were used to calibrate the machinery of the torture program. The large-scale experiment’s flawed findings were also used by Bush administration lawyers to create spurious legal cover for the entire program.

PHR calls on all medical and scientific communities to convene a commission to lay out what is known about the torture program, including the participation of health professionals, and urges the Trump administration to launch a criminal investigation to get a full accounting of the crimes committed by the CIA and other government agencies.

The report is here.

Monday, July 3, 2017

How Scientists are Working to Create Cyborg Humans with Super Intelligence

Hannah Osborne
Newsweek
Originally posted on June 14, 2017

Here is an excerpt:

There are three main approaches to doing this. The first involves recording information from the brain, decoding it via a computer or machine interface, and then utilizing the information for a purpose.

The second is to influence the brain by stimulating it pharmacologically or electrically: “So you can stimulate the brain to produce artificial sensations, like the sensation of touch, or vision for the blind,” he says. “Or you could stimulate certain areas to improve their functions—like improved memory, attention. You can even connect two brains together—one brain will stimulate the other—like where scientists transferred memories of one rat to another.”

The final approach is defined as “futuristic.” This would include humans becoming cyborgs, for example, and would raise the ethical and philosophical questions that will need to be addressed before scientists merge man and machine.

Lebedev said these ethical concerns could become real in the next 10 years, but the current technology poses no serious threat.

The article is here.

Sunday, July 2, 2017

Religious doctors who don’t want to refer patients for assisted dying have launched a hopeless court case

Derek Smith
Special to National Post 
Originally posted June 12, 2017

In a case being heard this week in an Ontario divisional court, a group of Christian doctors have launched a constitutional challenge against the College of Physicians and Surgeons of Ontario. The college requires religious doctors who refuse to offer medical assistance in dying (MAID) to give an “effective referral” so that the patient can receive the procedure from a willing doctor nearby.

The doctors say that the college has limited their religious freedom under the Charter of Rights and Freedoms unjustifiably. They argue that a referral endorses the procedure and helps kill, breaking God’s commandment. In their view, patients should have to find willing doctors themselves and “self-refer,” sparing religious objectors from sin and a guilty conscience.

The college should certainly accommodate religious objectors more than it currently does, but the lawsuit will likely fail. It deserves to fail.

Religious freedom sometimes has to yield to laws that prevent religious people from harming others. The Supreme Court of Canada has emphasized this in limiting religious freedom on a wide range of topics, including denials of blood transfusions, witnesses wearing niqabs in criminal trials, child custody disputes, accountability for unaccredited church schools and bans on Sunday shopping.

The article is here.

Saturday, July 1, 2017

Hypocritical Flip-Flop, or Courageous Evolution? When Leaders Change Their Moral Minds.

Kreps, Tamar A.; Laurin, Kristin; Merritt, Anna C.
Journal of Personality and Social Psychology, Jun 08 , 2017

Abstract

How do audiences react to leaders who change their opinion after taking moral stances? We propose that people believe moral stances are stronger commitments, compared with pragmatic stances; we therefore explore whether and when audiences believe those commitments can be broken. We find that audiences believe moral commitments should not be broken, and thus that they deride as hypocritical leaders who claim a moral commitment and later change their views. Moreover, they view them as less effective and less worthy of support. Although participants found a moral mind changer especially hypocritical when they disagreed with the new view, the effect persisted even among participants who fully endorsed the new view. We draw these conclusions from analyses and meta-analyses of 15 studies (total N = 5,552), using recent statistical advances to verify the robustness of our findings. In several of our studies, we also test for various possible moderators of these effects; overall we find only 1 promising finding: some evidence that 2 specific justifications for moral mind changes—citing a personally transformative experience, or blaming external circumstances rather than acknowledging opinion change—help moral leaders appear more courageous, but no less hypocritical. Together, our findings demonstrate a lay belief that moral views should be stable over time; they also suggest a downside for leaders in using moral framings.

The article is here.

Trump's politicking raises ethics red flags

Julie Bykowicz
The Associated Press
Originally posted on June 27, 2017

Here is an excerpt:

The historically early campaigning comes with clear fundraising benefits, but it has raised red flags. Among them: Government employees have inappropriately crossed over into campaign activities, tax dollars may be subsidizing some aspects of campaign events, and as a constant candidate, the president risks alienating Americans who did not vote for him.

Larry Noble, former general counsel to the Federal Election Commission, said the early campaigning creates plenty of "potential tripwires," adding: "They're going to have to proceed very carefully to avoid violations."

The White House ensures that political entities pay for campaign events, and White House lawyers provide advice to employees to make sure they do not run afoul of rules preventing overtly political activities on government time, spokeswoman Lindsay Walter said Tuesday.

The Trump team has decided that any risks are worth it.

The article is here.