Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, July 5, 2017

Chief executives who lack ethics should be more afraid of public opinion than ever

Emma Koehn
Smart Company
Originally posted June 16, 2017

The age of the internet has made it nearly impossible for companies to hide when someone in their organisation makes a major blunder, and research indicates the world is now tougher than ever on bosses who stuff up.

PricewaterhouseCoopers partners Kristin Rivera and Per-Ola Karlsson suggest in Harvard Business Review this week that the numbers don’t lie: more chief executives are being fired for “ethical blunders” than ever before, with scrutiny from both customers and shareholders accelerating.

The pair examine the numbers from PwC’s most recent global chief executive success study, which found that the share of company heads dismissed for ethical lapses rose from 3.9% in the four years preceding 2012 to 5.3% in the four years ending in 2016.

“Firstly, the public has become more suspicious, more critical and less forgiving of corporate misbehaviour,” Rivera and Karlsson say.

“Second, governance and regulation in many countries has become both more proactive and more punitive.”

The article is here.

DOJ corporate compliance watchdog resigns citing Trump's conduct

Olivia Beavers
The Hill
Originally published July 2, 2017

A top Justice Department official who served as a corporate compliance watchdog has left her job, saying she could no longer press companies to comply with the government's ethics laws while members of the administration she worked for conducted themselves in ways that, she says, would not be tolerated.

Hui Chen served in the department’s compliance counsel office from November 2015 until her resignation in June. She broke her silence last week in a LinkedIn post, highlighted by The International Business Times, that points to the Trump administration’s conduct as the reason she left.

“To sit across the table from companies and question how committed they were to ethics and compliance felt not only hypocritical, but very much like shuffling the deck chair on the Titanic," Chen wrote.

The article is here.

Tuesday, July 4, 2017

Psychologists Open a Window on Brutal C.I.A. Interrogations: A Lawsuit Filed on Behalf of Former Prisoners Reveals New Details

Sheri Fink & James Risen
The New York Times
Originally posted June 21, 2017

Fifteen years after he helped devise the brutal interrogation techniques used on terrorism suspects in secret C.I.A. prisons, John Bruce Jessen, a former military psychologist, expressed ambivalence about the program.

He described himself and a fellow military psychologist, James Mitchell, as reluctant participants in using the techniques, some of which are widely viewed as torture, but also justified the practices as effective in getting resistant detainees to cooperate.

“I think any normal, conscionable man would have to consider carefully doing something like this,” Dr. Jessen said in a newly disclosed deposition. “I deliberated with great, soulful torment about this, and obviously I concluded that it could be done safely or I wouldn’t have done it.”

The two psychologists — whom C.I.A. officials have called architects of the interrogation program, a designation they dispute — are defendants in the only lawsuit that may hold participants accountable for causing harm.

The program has been well documented, but under deposition, with a camera focused on their faces, Drs. Jessen and Mitchell provided new details about the interrogation effort, their roles in it and their rationales. Their accounts were sometimes at odds with their own correspondence at the time, as well as previous portrayals of them by officials and other interrogators as eager participants in the program.

The article is here.

Nuremberg Betrayed: Human Experimentation and the CIA Torture Program

Sarah Dougherty and Scott A. Allen
Physicians for Human Rights
June 2017

Based on an analysis of thousands of pages of documents and years of research, Physicians for Human Rights shows that the CIA’s post-9/11 torture program constituted an illegal, unethical regime of experimental research on unwilling human subjects, testing the flawed hypothesis that torture could aid interrogators in breaking the resistance of detainees. In “Nuremberg Betrayed: Human Experimentation and the CIA Torture Program,” PHR researchers show that CIA contract psychologists James Mitchell and Bruce Jessen created a research program in which health professionals designed and applied torture techniques and collected data on torture’s effects. This constitutes one of the gravest breaches of medical ethics by U.S. health personnel since the Nuremberg Code was developed in the wake of Nazi medical atrocities committed during World War Two.

Delving into the role health professionals played in designing and implementing torture, the report uses newly released documents to show how the results of untested, brutal torture techniques were used to calibrate the machinery of the torture program. The large-scale experiment’s flawed findings were also used by Bush administration lawyers to create spurious legal cover for the entire program.

PHR calls on all medical and scientific communities to convene a commission to lay out what is known about the torture program, including the participation of health professionals, and urges the Trump administration to launch a criminal investigation to get a full accounting of the crimes committed by the CIA and other government agencies.

The report is here.

Monday, July 3, 2017

How Scientists are Working to Create Cyborg Humans with Super Intelligence

Hannah Osborne
Newsweek
Originally posted on June 14, 2017

Here is an excerpt:

There are three main approaches to doing this. The first involves recording information from the brain, decoding it via a computer or machine interface, and then utilizing the information for a purpose.

The second is to influence the brain by stimulating it pharmacologically or electrically: “So you can stimulate the brain to produce artificial sensations, like the sensation of touch, or vision for the blind,” he says. “Or you could stimulate certain areas to improve their functions—like improved memory, attention. You can even connect two brains together—one brain will stimulate the other—like where scientists transferred memories of one rat to another.”

The final approach is defined as “futuristic.” This would include humans becoming cyborgs, for example, and would raise the ethical and philosophical questions that will need to be addressed before scientists merge man and machine.

Lebedev said these ethical concerns could become real in the next 10 years, but that current technology poses no serious threat.

The article is here.

Sunday, July 2, 2017

Religious doctors who don’t want to refer patients for assisted dying have launched a hopeless court case

Derek Smith
Special to National Post 
Originally posted June 12, 2017

In a case being heard this week in an Ontario divisional court, a group of Christian doctors have launched a constitutional challenge against the College of Physicians and Surgeons of Ontario. The college requires religious doctors who refuse to offer medical assistance in dying (MAID) to give an “effective referral” so that the patient can receive the procedure from a willing doctor nearby.

The doctors say that the college has limited their religious freedom under the Charter of Rights and Freedoms unjustifiably. They argue that a referral endorses the procedure and helps kill, breaking God’s commandment. In their view, patients should have to find willing doctors themselves and “self-refer,” sparing religious objectors from sin and a guilty conscience.

The college should certainly accommodate religious objectors more than it currently does, but the lawsuit will likely fail. It deserves to fail.

Religious freedom sometimes has to yield to laws that prevent religious people from harming others. The Supreme Court of Canada has emphasized this in limiting religious freedom on a wide range of topics, including denials of blood transfusions, witnesses wearing niqabs in criminal trials, child custody disputes, accountability for unaccredited church schools and bans on Sunday shopping.

The article is here.

Saturday, July 1, 2017

Hypocritical Flip-Flop, or Courageous Evolution? When Leaders Change Their Moral Minds.

Kreps, Tamar A.; Laurin, Kristin; Merritt, Anna C.
Journal of Personality and Social Psychology, June 8, 2017

Abstract

How do audiences react to leaders who change their opinion after taking moral stances? We propose that people believe moral stances are stronger commitments, compared with pragmatic stances; we therefore explore whether and when audiences believe those commitments can be broken. We find that audiences believe moral commitments should not be broken, and thus that they deride as hypocritical leaders who claim a moral commitment and later change their views. Moreover, they view them as less effective and less worthy of support. Although participants found a moral mind changer especially hypocritical when they disagreed with the new view, the effect persisted even among participants who fully endorsed the new view. We draw these conclusions from analyses and meta-analyses of 15 studies (total N = 5,552), using recent statistical advances to verify the robustness of our findings. In several of our studies, we also test for various possible moderators of these effects; overall we find only 1 promising finding: some evidence that 2 specific justifications for moral mind changes—citing a personally transformative experience, or blaming external circumstances rather than acknowledging opinion change—help moral leaders appear more courageous, but no less hypocritical. Together, our findings demonstrate a lay belief that moral views should be stable over time; they also suggest a downside for leaders in using moral framings.

The article is here.

Trump's politicking raises ethics red flags

Julie Bykowicz
The Associated Press
Originally posted on June 27, 2017

Here is an excerpt:

The historically early campaigning comes with clear fundraising benefits, but it has raised red flags. Among them: Government employees have inappropriately crossed over into campaign activities, tax dollars may be subsidizing some aspects of campaign events, and as a constant candidate, the president risks alienating Americans who did not vote for him.

Larry Noble, former general counsel to the Federal Election Commission, said the early campaigning creates plenty of "potential tripwires," adding: "They're going to have to proceed very carefully to avoid violations."

The White House ensures that political entities pay for campaign events, and White House lawyers provide advice to employees to make sure they do not run afoul of rules preventing overtly political activities on government time, spokeswoman Lindsay Walter said Tuesday.

The Trump team has decided that any risks are worth it.

The article is here.

Friday, June 30, 2017

Ethics and Artificial Intelligence With IBM Watson's Rob High

Blake Morgan
Forbes.com
Originally posted June 12, 2017

Artificial intelligence seems to be popping up everywhere, and it has the potential to change nearly everything we know about data and the customer experience. However, it also brings up new issues regarding ethics and privacy.

One of the keys to keeping AI ethical is for it to be transparent, says Rob High, vice president and chief technology officer of IBM Watson. When customers interact with a chatbot, for example, they need to know they are communicating with a machine and not an actual human. AI, like most other technology tools, is most effective when it is used to extend the natural capabilities of humans instead of replacing them. That means that AI and humans are best when they work together and can trust each other.

Chatbots are one of the most commonly used forms of AI. Although they can be used successfully in many ways, there is still a lot of room for growth. As they currently stand, chatbots mostly perform basic actions like turning on lights, providing directions, and answering simple questions that a person asks directly. However, in the future, chatbots should and will be able to go deeper to find the root of the problem. For example, a person asking a chatbot what her bank balance is might be asking the question because she wants to invest money or make a big purchase—a futuristic chatbot could find the real reason she is asking and turn it into a more developed conversation. In order to do that, chatbots will need to ask more questions and drill deeper, and humans need to feel comfortable providing their information to machines.

The article is here.