Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, March 26, 2018

Bill to Bar LGBTQ Discrimination Stokes New Nebraska Debate

Tess Williams
U.S. News & World Report
Originally published February 22, 2018

A bill that would prevent psychologists from discriminating against patients based on their sexual orientation or gender identity is reviving a nearly decade-old dispute in Nebraska state government.

Sen. Patty Pansing Brooks of Lincoln said Thursday that her bill would adopt the American Psychological Association's code of conduct, which prohibits discrimination against protected classes of people but does not require professionals to treat patients if they lack the relevant expertise or if treatment conflicts with their personal beliefs. In such cases, the professional would have to provide an adequate referral instead.

Pansing Brooks said the bill is unlikely to become law, but she hopes it will bring attention to the ongoing problem. She would prefer the dispute to be resolved internally, but if no conclusion is reached, she plans to call for a hearing later this year and will "not let this issue die."

The state Board of Psychology proposed new regulations in 2008, and the following year, the Department of Health and Human Services sent the changes to the Nebraska Catholic Conference for review. Pansing Brooks said she is unsure why the religious organization was given special review.

The article is here.

Non cogito, ergo sum

Ian Leslie
The Economist
Originally published May/June 2012

Here is an excerpt:

Researchers from Columbia Business School, New York, conducted an experiment in which people were asked to predict outcomes across a range of fields, from politics to the weather to the winner of “American Idol”. They found that those who placed high trust in their feelings made better predictions than those who didn’t. The result only applied, however, when the participants had some prior knowledge.

This last point is vital. Unthinking is not the same as ignorance; you can’t unthink if you haven’t already thought. Djokovic was able to pull off his wonder shot because he had played a thousand variations on it in previous matches and practice; Dylan’s lyrical outpourings drew on his immersion in folk songs, French poetry and American legends. The unconscious minds of great artists and sportsmen are like dense rainforests, which send up spores of inspiration.

The higher the stakes, the more overthinking is a problem. Ed Smith, a cricketer and author of “Luck”, uses the analogy of walking along a kerbstone: easy enough, but if there were a hundred-foot drop to the street, every step would be a trial. In high-performance fields it’s the older and more successful performers who are most prone to choke, because expectation is piled upon them. An opera singer launching into an aria at La Scala cannot afford to think how her technique might be improved. When Federer plays a match point these days, he may feel as if he’s standing on the cliff edge of his reputation.

The article is here.

Sunday, March 25, 2018

Did Iraq Ever Become A Just War?

Matt Peterson
The Atlantic
Originally posted March 24, 2018

Here is an excerpt:

There’s a broader sense of moral confusion about the conduct of America’s wars. In Iraq, what started as a war of choice came to resemble much more a war of necessity. Can a war that started unjustly ever become righteous? Or does the stain permanently taint anything that comes after it?

The answers to these questions come from the school of philosophy called “just war” theory, which tries to explain whether and when war is permissible, and under what circumstances. It offers two big ways to think about the justice of war. One is whether it’s appropriate to go to war in the first place. Take North Korea, for example. Is there a cause worth killing thousands—millions—of North and South Korean civilians over? Invoking “national security” isn’t enough to make a war just. Kim Jong Un’s nuclear weapons pose an obvious threat to South Korea, Japan, and the United States. But that alone doesn’t make war an acceptable choice, given the lives at stake. The ethics of war require the public to assess how certain it is that innocents will be killed if the military doesn’t act (Will Kim really use his nukes offensively?), whether there’s any way to remove the threat without violence (Has diplomacy been exhausted?), and whether the scale of the deaths that would come from intervention is truly in line with the danger war is meant to avert (If the peninsula has to be burned down to be saved, is it really worth it?)—among other considerations.

The other questions to ask are about the nature of the combat. Are soldiers taking care to target only North Korea’s military? Once the decision has been made that Kim’s nuclear weapons pose an imminent threat, hypothetically, that still wouldn’t make it acceptable to firebomb Pyongyang to turn the population against him. Similarly, American forces could not, say, blow up a bus full of children just because one of Kim’s generals was trying to escape on it.

The article is here.

Deadly gene mutations removed from human embryos in landmark study

Ian Sample
The Guardian
Originally published August 2, 2017

Scientists have modified human embryos to remove genetic mutations that cause heart failure in otherwise healthy young people, in a landmark demonstration of the controversial procedure.

It is the first time that human embryos have had their genomes edited outside China, where researchers have performed a handful of small studies to see whether the approach could prevent inherited diseases from being passed on from one generation to the next.

While none of the research so far has created babies from modified embryos, a move that would be illegal in many countries, the work represents a milestone in scientists’ efforts to master the technique and brings the prospect of human clinical trials one step closer.

The work focused on an inherited form of heart disease, but scientists believe the same approach could work for other conditions caused by single gene mutations, such as cystic fibrosis and certain kinds of breast cancer.

The article is here.

Saturday, March 24, 2018

Facebook employs psychologist whose firm sold data to Cambridge Analytica

Paul Lewis and Julia Carrie Wong
The Guardian
Originally published March 18, 2018

Here are two excerpts:

The co-director of a company that harvested data from tens of millions of Facebook users before selling it to the controversial data analytics firm Cambridge Analytica is currently working for the tech giant as an in-house psychologist.

Joseph Chancellor was one of two founding directors of Global Science Research (GSR), the company that harvested Facebook data using a personality app under the guise of academic research and later shared the data with Cambridge Analytica.

He was hired to work at Facebook as a quantitative social psychologist around November 2015, roughly two months after leaving GSR, which had by then acquired data on millions of Facebook users.

Chancellor is still working as a researcher at Facebook’s Menlo Park headquarters in California, where psychologists frequently conduct research and experiments using the company’s vast trove of data on more than 2 billion users.

(cut)

In the months that followed the creation of GSR, the company worked in collaboration with Cambridge Analytica to pay hundreds of thousands of users to take the test, under an agreement that their data would be collected for academic use.

However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions strong.

That data was sold to Cambridge Analytica as part of a commercial agreement.

Facebook’s “platform policy” allowed the collection of friends’ data only to improve user experience in the app, and barred that data from being sold on or used for advertising.

The information is here.

Breakthrough as scientists grow sheep embryos containing human cells

Nicola Davis
The Guardian
Originally published February 17, 2018

Growing human organs inside other animals has taken another step away from science fiction, with researchers announcing they have grown sheep embryos containing human cells.

Scientists say growing human organs inside animals could not only increase supply but also make it possible to genetically tailor organs to be compatible with the recipient’s immune system, by using the patient’s own cells in the procedure, thereby removing the possibility of rejection.

According to NHS Blood and Transplant, almost 460 people died in 2016 waiting for organs, while those who do receive transplants sometimes see organs rejected.

“Even today the best matched organs, except if they come from identical twins, don’t last very long because with time the immune system continuously is attacking them,” said Dr Pablo Ross from the University of California, Davis, who is part of the team working towards growing human organs in other species.

Ross added that if growing human organs inside other species does become possible, organ transplants might become an option beyond critical, life-threatening conditions.

The information is here.

Friday, March 23, 2018

Mark Zuckerberg Has No Way Out of Facebook's Quagmire

Leonid Bershidsky
Bloomberg News
Originally posted March 21, 2018

Here is an excerpt:

"Making sure time spent on Facebook is time well spent," as Zuckerberg puts it, should lead to the collection of better-quality data. If nobody is setting up fake accounts to spread disinformation, users are more likely to be their normal selves. Anyone analyzing these healthier interactions will likely have more success in targeting commercial and, yes, political offerings to real people. This would inevitably be a smaller yet still profitable enterprise, and no longer a growing one, at least in the short term. But the Cambridge Analytica scandal shows people may not be okay with Facebook's data gathering, improved or not.

The scandal follows the revelation (to most Facebook users who read about it) that, until 2015, application developers on the social network's platform were able to get information about a user's Facebook friends after asking permission in the most perfunctory way. The 2012 Obama campaign used this functionality. So -- though in a more underhanded way -- did Cambridge Analytica, which may or may not have used the data to help elect President Donald Trump.

Many people are angry at Facebook for not acting more resolutely to prevent CA's abuse, but if that were the whole problem, it would have been enough for Zuckerberg to apologize and point out that the offending functionality hasn't been available for several years. The #deletefacebook campaign -- now backed by WhatsApp co-founder Brian Acton, whom Facebook made a billionaire -- is, however, powered by a bigger problem than that. People are worried about the data Facebook is accumulating about them and about how these data are used. Facebook itself works with political campaigns to help them target messages; it did so for the Trump campaign, too, perhaps helping it more than CA did.

The article is here.

First Question: Should you stop using Facebook because it violated your trust?

Second Question: Is Facebook a defective product?

Facebook Woes: Data Breach, Securities Fraud, or Something Else?

Matt Levine
Bloomberg.com
Originally posted March 21, 2018

Here is an excerpt:

But the result is always "securities fraud," whatever the nature of the underlying input. An undisclosed data breach is securities fraud, but an undisclosed sexual-harassment problem or chicken-mispricing conspiracy will get you to the same place. There is an important practical benefit to a legal regime that works like this: It makes it easy to punish bad behavior, at least by public companies, because every sort of bad behavior is also securities fraud. You don't have to prove that the underlying chicken-mispricing conspiracy was illegal, or that the data breach was due to bad security procedures. All you have to prove is that it happened, and it wasn't disclosed, and the stock went down when it was. The evaluation of the badness is in a sense outsourced to the market: We know that the behavior was illegal, not because there was a clear law against it, but because the stock went down. Securities law is an all-purpose tool for punishing corporate badness, a one-size-fits-all approach that makes all badness commensurable using the metric of stock price. It has a certain efficiency.

On the other hand it sometimes makes me a little uneasy that so much of our law ends up working this way. "In a world of dysfunctional government and pervasive financial capitalism," I once wrote, "more and more of our politics is contested in the form of securities regulation." And: "Our government's duty to its citizens is mediated by their ownership of our public companies." When you punish bad stuff because it is bad for shareholders, you are making a certain judgment about what sort of stuff is bad and who is entitled to be protected from it.

Anyway Facebook Inc. wants to make it very clear that it did not suffer a data breach. When a researcher got data about millions of Facebook users without those users' explicit permission, and when the researcher turned that data over to Cambridge Analytica for political targeting in violation of Facebook's terms, none of that was a data breach. Facebook wasn't hacked. What happened was somewhere between a contractual violation and ... you know ... just how Facebook works? There is some splitting of hairs over this, and you can understand why -- consider that SEC guidance about when companies have to disclose data breaches -- but in another sense it just doesn't matter. You don't need to know whether the thing was a "data breach" to know how bad it was. You can just look at the stock price. The stock went down...

The article is here.

Thursday, March 22, 2018

The Ethical Design of Intelligent Robots

Sunidhi Ramesh
The Neuroethics Blog
Originally published February 27, 2018

Here is an excerpt:

In a 2016 study, a team of Georgia Tech scholars formulated a simulation in which 26 volunteers interacted “with a robot in a non-emergency task to experience its behavior and then [chose] whether [or not] to follow the robot’s instructions in an emergency.” To the researchers’ surprise (and unease), in this “emergency” situation (complete with artificial smoke and fire alarms), “all [of the] participants followed the robot in the emergency, despite half observing the same robot perform poorly [making errors by spinning, etc.] in a navigation guidance task just minutes before… even when the robot pointed to a dark room with no discernible exit, the majority of people did not choose to safely exit the way they entered.” It seems that we not only trust robots, but we also do so almost blindly.

The investigators labeled this tendency an alarming display of overtrust in robots, an overtrust that extended even to robots that had shown signs of being untrustworthy.

Not convinced? Let’s consider the recent Tesla self-driving car crashes. How, you may ask, could a self-driving car barrel into parked vehicles when the driver is still able to override the autopilot machinery and manually stop the vehicle in seemingly dangerous situations? Yet, these accidents have happened. Numerous times.

The answer may, again, lie in overtrust. “My Tesla knows when to stop,” such a driver may think. Yet, as the car lurches uncomfortably into a position that would push the rest of us to slam on our brakes, a driver in a self-driving car (and an unknowing victim of this overtrust) still has faith in the technology.

“My Tesla knows when to stop.” Until it doesn’t. And it’s too late.