Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, October 7, 2017

Trump Administration Rolls Back Birth Control Mandate

Robert Pear, Rebecca R. Ruiz, and Laurie Goodstein
The New York Times
Originally published October 6, 2017

The Trump administration on Friday moved to expand the rights of employers to deny women insurance coverage for contraception and issued sweeping guidance on religious freedom that critics said could also erode civil rights protections for lesbian, gay, bisexual and transgender people.

The twin actions, by the Department of Health and Human Services and the Justice Department, were meant to carry out a promise issued by President Trump five months ago, when he declared in the Rose Garden that “we will not allow people of faith to be targeted, bullied or silenced anymore.”

Attorney General Jeff Sessions quoted those words in issuing guidance to federal agencies and prosecutors, instructing them to take the position in court that workers, employers and organizations may claim broad exemptions from nondiscrimination laws on the basis of religious objections.

At the same time, the Department of Health and Human Services issued two rules rolling back a federal requirement that employers must include birth control coverage in their health insurance plans. The rules offer an exemption to any employer that objects to covering contraception services on the basis of sincerely held religious beliefs or moral convictions.

More than 55 million women have access to birth control without co-payments because of the contraceptive coverage mandate, according to a study commissioned by the Obama administration. Under the new regulations, hundreds of thousands of women could lose those benefits.

The article is here.

Editor's Note: And just when the abortion rate was at pre-1973 levels.

Friday, October 6, 2017

AI Research Is in Desperate Need of an Ethical Watchdog

Sophia Chen
Wired Science
Originally published September 18, 2017

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.
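
As background for the "81 percent accuracy" figure: numbers like this typically come from training a classifier on labeled examples and scoring its predictions on held-out data. The sketch below is purely illustrative, using synthetic features and logistic regression; it is not the Stanford study's data or pipeline, just the generic evaluation recipe behind such headline figures.

```python
# Illustrative only: how a binary classifier's "percent accuracy" is
# typically computed. Synthetic features stand in for whatever
# image-derived features a real system would use.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 10_000, 50          # hypothetical sizes
X = rng.normal(size=(n_samples, n_features))  # stand-in feature vectors
w = rng.normal(size=n_features)
y = (X @ w + rng.normal(size=n_samples) > 0).astype(int)  # synthetic labels

# Hold out 20% of the data, train on the rest, report held-out accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

Note that raw accuracy depends heavily on the class balance of the test set, which is one reason headline figures like this one are often disputed.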

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.

The article is here.

Lawsuit Over a Suicide Points to a Risk of Antidepressants

Roni Caryn Rabin
The New York Times
Originally published September 11, 2017

Here is an excerpt:

The case is a rare instance in which a lawsuit over a suicide involving antidepressants actually went to trial; many such cases are either dismissed or settled out of court, said Brent Wisner, of the law firm Baum, Hedlund, Aristei & Goldman, which represented Ms. Dolin.

The verdict is also unusual because Glaxo, which has asked the court to overturn the verdict or to grant a new trial, no longer sells Paxil in the United States and did not manufacture the generic form of the medication Mr. Dolin was taking. The company argues that it should not be held liable for a pill it did not make.

Concerns about safety have long dogged antidepressants, though many doctors and patients consider the medications lifesavers.

Ever since they were linked to an increase in suicidal behaviors in young people more than a decade ago, all antidepressants, including Paxil, have carried a “black box” warning label, reviewed and approved by the Food and Drug Administration, saying that they increase the risk of suicidal thinking and behavior in children, teens and young adults under age 25.

The warning labels also stipulate that the suicide risk has not been seen in short-term studies of anyone over age 24, but they urge close monitoring of all patients initiating drug treatment.

The article is here.

Thursday, October 5, 2017

Leadership Takes Self-Control. Here’s What We Know About It

Kai Chi (Sam) Yam, Huiwen Lian, D. Lance Ferris, Douglas Brown
Harvard Business Review
Originally published June 5, 2017

Here is an excerpt:

Our review identified a few consequences that are consistently linked to having lower self-control at work:
  1. Increased unethical/deviant behavior: Studies have found that when self-control resources are low, nurses are more likely to be rude to patients, tax accountants are more likely to engage in fraud, and employees in general are more likely to engage in various forms of unethical behavior, such as lying to their supervisors, stealing office supplies, and so on.
  2. Decreased prosocial behavior: Depleted self-control makes employees less likely to speak up if they see problems at work, less likely to help fellow employees, and less likely to engage in corporate volunteerism.
  3. Reduced job performance: Lower self-control can lead employees to spend less time on difficult tasks, exert less effort at work, be more distracted (e.g., surfing the internet during work time), and generally perform worse than they would with normal self-control.
  4. Negative leadership styles: Perhaps most concerning, leaders with lower self-control often exhibit counterproductive leadership styles. They are more likely to verbally abuse their followers (rather than using positive means to motivate them), more likely to build weak relationships with their followers, and less charismatic. Scholars have estimated that such negative and abusive behavior costs United States corporations $23.8 billion annually.
Our review makes clear that helping employees maintain self-control is an important task if organizations want to be more effective and ethical. Fortunately, we identified three key factors that can help leaders foster self-control among employees and mitigate the negative effects of losing self-control.

The article is here.

Biased Algorithms Are Everywhere, and No One Seems to Care

Will Knight
MIT Technology Review
Originally published July 12, 2017

Here is an excerpt:

Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI. If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities. The eventual outcry might also stymie the progress of an incredibly useful technology (see “Inspecting Algorithms for Bias”).

Algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions. Proprietary algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan.

The founders of the new AI Now Initiative, Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google, say bias may exist in all sorts of services and products.

“It’s still early days for understanding algorithmic bias,” Crawford and Whittaker said in an e-mail. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”

Examples of algorithmic bias that have come to light lately, they say, include flawed and misrepresentative systems used to rank teachers, and gender-biased models for natural language processing.
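
The "gender-biased models for natural language processing" they mention can be probed directly: in pretrained word embeddings, occupation words often sit measurably closer to one gendered pronoun than the other. Below is a minimal sketch of such a probe, assuming the gensim library and one of its standard downloadable GloVe models; the occupation list is an illustrative choice, not taken from the article.

```python
# Minimal probe for gender associations in pretrained word embeddings.
# Assumes the gensim package; "glove-wiki-gigaword-50" is one of its
# standard downloadable models (an illustrative choice).
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")

# Compare each occupation's cosine similarity to "he" vs. "she".
for job in ["nurse", "engineer", "receptionist", "programmer"]:
    lean = model.similarity(job, "he") - model.similarity(job, "she")
    side = "he" if lean > 0 else "she"
    print(f"{job:>13}: closer to '{side}' (diff {lean:+.3f})")
```

Simple probes like this are how much of the embedding-bias research documents the problem; a serious audit would use larger, vetted word lists and statistical tests rather than a handful of hand-picked terms.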

The article is here.

Wednesday, October 4, 2017

Better Minds, Better Morals: A Procedural Guide to Better Judgment

G. Owen Schaefer and Julian Savulescu
Journal of Posthuman Studies
Vol. 1, No. 1 (2017), pp. 26-43

Abstract:

Making more moral decisions – an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we’re employing are sound and reliable. We identify four broad categories of judgment to be targeted – cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A copy of the article is here.

Google Sets Limits on Addiction Treatment Ads, Citing Safety

Michael Corkery
The New York Times
Originally published September 14, 2017

As drug addiction soars in the United States, a booming business of rehab centers has sprung up to treat the problem. And when drug addicts and their families search for help, they often turn to Google.

But prosecutors and health advocates have warned that many online searches are leading addicts to click on ads for rehab centers that are unfit to help them or, in some cases, endangering their lives.

This week, Google acknowledged the problem — and started restricting ads that come up when someone searches for addiction treatment on its site. “We found a number of misleading experiences among rehabilitation treatment centers that led to our decision,” Google spokeswoman Elisa Greene said in a statement on Thursday.

Google has taken similar steps to restrict advertisements only a few times before. Last year it limited ads for payday lenders, and in the past it created a verification system for locksmiths to prevent fraud.

In this case, the restrictions will limit a popular marketing tool in the $35 billion addiction treatment business, affecting thousands of small-time operators.

The article is here.

Tuesday, October 3, 2017

VA About To Scrap Ethics Law That Helps Safeguard Veterans From Predatory For-Profit Colleges

Adam Linehan
Task & Purpose
Originally posted October 2, 2017

An ethics law that prohibits Department of Veterans Affairs employees from receiving money or owning a stake in for-profit colleges that rake in millions in G.I. Bill tuition has “illogical and unintended consequences,” according to VA, which is pushing to suspend the 50-year-old statute.

But veteran advocacy groups say suspending the law would make it easier for the for-profit education industry to exploit its biggest cash cow: veterans. 

In a proposal published in the Federal Register on Sept. 14, VA claims that the statute — which, according to The New York Times, was enacted following a string of scandals involving the for-profit education industry — is redundant, given other conflict-of-interest laws that apply to all federal employees and provide sufficient safeguards.

Critics of the proposal, however, say that the statute provides additional regulations that protect against abuse and provide more transparency. 

“The statute is one of many important bipartisan reforms Congress implemented to protect G.I. Bill benefits from waste, fraud, and abuse,” William Hubbard, Student Veterans of America’s vice president of government affairs, said in an email to Task & Purpose. “A thoughtful and robust public conversation should be had to ensure that the interests of student veterans is the top of the priority list.”

The article is here.

Editor's Note: The swamp continues to grow under the current administration.

Facts Don’t Change People’s Minds. Here’s What Does

Ozan Varol
Heleo
Originally posted September 6, 2017

Here is an excerpt:

The mind doesn’t follow the facts. Facts, as John Adams put it, are stubborn things, but our minds are even more stubborn. Doubt isn’t always resolved in the face of facts for even the most enlightened among us, however credible and convincing those facts might be.

As a result of the well-documented confirmation bias, we tend to undervalue evidence that contradicts our beliefs and overvalue evidence that confirms them. We filter out inconvenient truths and arguments on the opposing side. As a result, our opinions solidify, and it becomes increasingly harder to disrupt established patterns of thinking.

We believe in alternative facts if they support our pre-existing beliefs. Aggressively mediocre corporate executives remain in office because we interpret the evidence to confirm the accuracy of our initial hiring decision. Doctors continue to preach the ills of dietary fat despite emerging research to the contrary.

If you have any doubts about the power of the confirmation bias, think back to the last time you Googled a question. Did you meticulously read each link to get a broad objective picture? Or did you simply skim through the links looking for the page that confirms what you already believed was true? And let’s face it, you’ll always find that page, especially if you’re willing to click through to Page 12 on the Google search results.

The article is here.