Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, December 12, 2017

Regulation of AI: Not If But When and How

Ben Loewenstein
RSA.org
Originally published November 21, 2017

Here is an excerpt:

Firstly, AI is already embedded in today’s world, albeit in infant form. Fully autonomous vehicles are not for sale yet, but self-parking cars have been on the market for years. We already rely on biometric technology like facial recognition to grant us entry into a country, and robots are giving us banking advice.

Secondly, there is broad consensus that controls are needed. For example, a report issued last December by the office of former US President Barack Obama concluded that “aggressive policy action” would be required in the event of large job losses due to automation to ensure it delivers prosperity. If the American Government is no longer a credible source of accurate information for you, take the word of heavyweights like Bill Gates and Elon Musk, both of whom have called for AI to be regulated.

Finally, the building blocks of AI regulation are already looming in the form of rules like the European Union’s General Data Protection Regulation, which will take effect next year. The UK government’s independent review’s recommendations are also likely to become government policy. This means that we could see a regime established where firms within the same sector share data with each other under prescribed governance structures in an effort to curb the monopolies big tech companies currently enjoy on consumer information.

The latter characterises the threat facing the AI industry: the prospect of lawmakers making bold decisions that alter the trajectory of innovation. This is not an exaggeration.

The article is here.

Can AI Be Taught to Explain Itself?

Cliff Kuang
The New York Times Magazine
Originally published November 21, 2017

Here are two excerpts:

In 2018, the European Union will begin enforcing a law requiring that any decision made by a machine be readily explainable, on penalty of fines that could cost companies like Google and Facebook billions of dollars. The law was written to be powerful and broad, but it fails to define what constitutes a satisfying explanation or how exactly those explanations are to be reached. It represents a rare case in which a law has managed to leap into a future that academics and tech companies are just beginning to devote concentrated effort to understanding. As researchers at Oxford dryly noted, the law “could require a complete overhaul of standard and widely used algorithmic techniques” — techniques already permeating our everyday lives.

(cut)

“Artificial intelligence” is a misnomer, an airy and evocative term that can be shaded with whatever notions we might have about what “intelligence” is in the first place. Researchers today prefer the term “machine learning,” which better describes what makes such algorithms powerful. Let’s say that a computer program is deciding whether to give you a loan. It might start by comparing the loan amount with your income; then it might look at your credit history, marital status or age; then it might consider any number of other data points. After exhausting this “decision tree” of possible variables, the computer will spit out a decision. If the program were built with only a few examples to reason from, it probably wouldn’t be very accurate. But given millions of cases to consider, along with their various outcomes, a machine-learning algorithm could tweak itself — figuring out when to, say, give more weight to age and less to income — until it is able to handle a range of novel situations and reliably predict how likely each loan is to default.
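The loan example above can be made concrete. Below is a toy, pure-Python sketch — all data and feature names are hypothetical, chosen only for illustration. A one-rule “decision stump” scans each variable for the threshold that best separates past defaults from repayments, a miniature version of the self-tweaking the passage describes (figuring out which variable, such as age or income, deserves the most weight).

```python
# Toy illustration of the loan example: a one-rule "decision stump"
# learns which feature and threshold best separate past defaults (1)
# from repayments (0). Data and feature names are made up.

FEATURES = ["income", "loan_amount", "age"]
past_cases = [
    ([30_000, 25_000, 22], 1),   # 1 = defaulted
    ([90_000, 10_000, 45], 0),   # 0 = repaid
    ([40_000, 35_000, 30], 1),
    ([120_000, 20_000, 50], 0),
    ([25_000, 30_000, 21], 1),
    ([80_000, 15_000, 38], 0),
]

def fit_stump(cases):
    """Find the (feature, threshold, direction) with the fewest errors."""
    best = None
    for f in range(len(FEATURES)):
        for x, _ in cases:
            t = x[f]
            for direction in (1, -1):
                # predict "default" when direction * value < direction * threshold
                errors = sum(
                    (1 if direction * xi[f] < direction * t else 0) != y
                    for xi, y in cases
                )
                if best is None or errors < best[0]:
                    best = (errors, f, t, direction)
    return best[1:]

feature, threshold, direction = fit_stump(past_cases)

def predict(x):
    return 1 if direction * x[feature] < direction * threshold else 0

print(FEATURES[feature])              # the variable the rule learned to weight
print(predict([85_000, 12_000, 40]))  # a new applicant
```

On this tiny dataset the stump settles on income alone; with millions of real cases and a full decision tree, the learned thresholds would combine many variables and generalize far better — which is exactly the gap between a few examples and machine learning at scale.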

The article is here.

Monday, December 11, 2017

Epistemic rationality: Skepticism toward unfounded beliefs requires sufficient cognitive ability and motivation to be rational

Tomas Ståhl and Jan-Willem van Prooijen
Personality and Individual Differences
Volume 122, 1 February 2018, Pages 155-163

Abstract

Why does belief in the paranormal, conspiracy theories, and various other phenomena that are not backed up by evidence remain widespread in modern society? In the present research we adopt an individual difference approach, as we seek to identify psychological precursors of skepticism toward unfounded beliefs. We propose that part of the reason why unfounded beliefs are so widespread is because skepticism requires both sufficient analytic skills, and the motivation to form beliefs on rational grounds. In Study 1 we show that analytic thinking is associated with a lower inclination to believe various conspiracy theories, and paranormal phenomena, but only among individuals who strongly value epistemic rationality. We replicate this effect on paranormal belief, but not conspiracy beliefs, in Study 2. We also provide evidence suggesting that general cognitive ability, rather than analytic cognitive style, is the underlying facet of analytic thinking that is responsible for these effects.

The article is here.

To think critically, you have to be both analytical and motivated

John Timmer
Ars Technica
Originally published November 15, 2017

Here is an excerpt:

One of the proposed solutions to this issue is to incorporate more critical thinking into our education system. But critical thinking is more than just a skill set; you have to recognize when to apply it, do so effectively, and then know how to respond to the results. Understanding what makes a person effective at analyzing fake news and conspiracy theories has to take all of this into account. A small step toward that understanding comes from a recently released paper, which looks at how analytical thinking and motivated skepticism interact to make someone an effective critical thinker.

Valuing rationality

The work comes courtesy of the University of Illinois at Chicago's Tomas Ståhl and Jan-Willem van Prooijen at VU Amsterdam. This isn't the first time we've heard from Ståhl; last year, he published a paper on what he termed "moralizing epistemic rationality." In it, he looked at people's thoughts on the place critical thinking should occupy in their lives. The research identified two classes of individuals: those who valued their own engagement with critical thinking, and those who viewed it as a moral imperative that everyone engage in this sort of analysis.

The information is here.

The target article is here.

Sunday, December 10, 2017

The Vanishing "Values Voter"

McKay Coppins
The Atlantic
Originally posted December 7, 2017

Here is an excerpt:

For decades, the belief that private morality was essential to assessing the worthiness of politicians and public figures was an animating ideal at the core of the Christian right’s credo. As with most ideals, the movement did not always live up to its own standards. So-called “values voters” pursued a polarizing, multi-faceted agenda that was often tangled up in prejudice and partisanship. They fiercely defended Clarence Thomas when he was accused of sexually harassing Anita Hill, for example, and then excoriated Bill Clinton for his affair with Monica Lewinsky.

But even when they were failing to hold their own side accountable, they still clung to the idea that “character counts.” As recently as 2011, a poll by the Public Religion Research Institute found that only 30 percent of white evangelicals believed “an elected official who commits an immoral act in their personal life can still behave ethically and fulfill their duties in their public and professional life.” But by the time Donald Trump was running for president in 2016, that number had risen sharply to 72 percent. White evangelicals are now more tolerant of immoral behavior by elected officials than the average American. “This is really a sea change in evangelical ethics,” Robert P. Jones, the head of the institute and the author of The End of White Christian America, recently told me.

(cut)

“The way evangelicals see the world, the culture is not only slipping away—it’s slipping away in all caps, with four exclamation points after that. It’s going to you-know-what in a handbasket,” Brody told me. “Where does that leave evangelicals? It leaves them with a choice. Do they sacrifice a little bit of that ethical guideline they’ve used in the past in exchange for what they believe is saving the culture?”

The article is here.

These are the Therapist Behaviors that are Helpful or Harmful

Christian Jarrett
Research Digest
Originally published November 23, 2017

Here is an excerpt:

The most helpful therapy moments involved specific treatment techniques, such as times the therapist gave the client a concrete strategy they could use in everyday life; instances when the therapist made connections for the client (such as identifying events that affected their depression symptoms); or helped them process their emotions. Other helpful moments involved fundamental therapist skills, such as listening and expressing empathy, offering support or praise, or when the therapist discussed the process of therapy, including what the client wants from it.

The clients said they found these moments helpful because they learned a new skill, felt heard or understood, gained insight and/or were better able to process their emotions.

In terms of hindering therapist behaviours, these often seemed the same, superficially at least, as the helpful behaviours, including instances when the therapist listened, attempted to express empathy, or attempted to structure the session. The difference seemed to be in the execution or timing of these behaviours. The clients said they found these moments unhelpful when they were off-topic (for instance, their therapist listened to them “rambling” on about irrelevant details without intervening); when they felt like they were being judged; or they felt it was too soon for them to confront a particular issue.

The article is here.

Saturday, December 9, 2017

The Root of All Cruelty

Paul Bloom
The New Yorker
Originally published November 20, 2017

Here are two excerpts:

Early psychological research on dehumanization looked at what made the Nazis different from the rest of us. But psychologists now talk about the ubiquity of dehumanization. Nick Haslam, at the University of Melbourne, and Steve Loughnan, at the University of Edinburgh, provide a list of examples, including some painfully mundane ones: “Outraged members of the public call sex offenders animals. Psychopaths treat victims merely as means to their vicious ends. The poor are mocked as libidinous dolts. Passersby look through homeless people as if they were transparent obstacles. Dementia sufferers are represented in the media as shuffling zombies.”

The thesis that viewing others as objects or animals enables our very worst conduct would seem to explain a great deal. Yet there’s reason to think that it’s almost the opposite of the truth.

(cut)

But “Virtuous Violence: Hurting and Killing to Create, Sustain, End, and Honor Social Relationships” (Cambridge), by the anthropologist Alan Fiske and the psychologist Tage Rai, argues that these standard accounts often have it backward. In many instances, violence is neither a cold-blooded solution to a problem nor a failure of inhibition; most of all, it doesn’t entail a blindness to moral considerations. On the contrary, morality is often a motivating force: “People are impelled to violence when they feel that to regulate certain social relationships, imposing suffering or death is necessary, natural, legitimate, desirable, condoned, admired, and ethically gratifying.” Obvious examples include suicide bombings, honor killings, and the torture of prisoners during war, but Fiske and Rai extend the list to gang fights and violence toward intimate partners. For Fiske and Rai, actions like these often reflect the desire to do the right thing, to exact just vengeance, or to teach someone a lesson. There’s a profound continuity between such acts and the punishments that—in the name of requital, deterrence, or discipline—the criminal-justice system lawfully imposes. Moral violence, whether reflected in legal sanctions, the killing of enemy soldiers in war, or punishing someone for an ethical transgression, is motivated by the recognition that its victim is a moral agent, someone fully human.

The article is here.

Evidence-Based Policy Mistakes

Kaushik Basu
Project Syndicate
Originally published November 30, 2017

Here is an excerpt:

Likewise, US President Donald Trump cites simplistic trade-deficit figures to justify protectionist policies that win him support among a certain segment of the US population. In reality, the evidence suggests that such policies will hurt the very people Trump claims to be protecting.

Now, the chair of Trump’s Council of Economic Advisers, Kevin Hassett, is attempting to defend Congressional Republicans’ effort to slash corporate taxes by claiming that, when developed countries have done so in the past, workers gained “well north of” $4,000 per year. Yet there is ample evidence that the benefits of such tax cuts accrue disproportionately to the rich, largely via companies buying back stock and shareholders earning higher dividends.

It is not clear whence Hassett is getting his data. But chances are that, at the very least, he is misinterpreting it. And he is far from alone in failing to reach accurate conclusions when assessing a given set of data.

Consider the oft-repeated refrain that, because there is evidence that virtually all jobs over the last decade were created by the private sector, the private sector must be the most effective job creator. At first glance, the logic might seem sound. But, on closer examination, the statement begs the question. Imagine a Soviet economist claiming that, because the government created virtually all jobs in the Soviet Union, the government must be the most effective job creator. To find the truth, one would need, at a minimum, data on who else tried to create jobs, and how.

The article is here.

Friday, December 8, 2017

University could lose millions from “unethical” research backed by Peter Thiel

Beth Mole
Ars Technica
Originally published November 14, 2017

Here is an excerpt:

According to HHS records, SIU (Southern Illinois University) had committed to following all HHS regulations—including safety requirements and having IRB approval and oversight—for all clinical trials, regardless of who funded the trials. If SIU fails to do so, it could jeopardize the $15 million in federal grant money the university receives for its other research.

Earlier, an SIU spokesperson had claimed that SIU didn’t need to follow HHS regulations in this case because Halford was acting as an independent researcher with Rational Vaccines. Thus, SIU had no legal responsibility to ensure proper safety protocols and wasn’t risking its federal funding.

In her e-mail, Buchanan asked for the “results of SIU’s evaluation of its jurisdiction over this research.”

In his response, Kruse noted that SIU was not aware of the St. Kitts trial until October 2016, two months after the trial was completed. But, he wrote, the university had opened an investigation into Halford’s work following his death in June of this year. The decision to investigate was also based on disclosures from American filmmaker Agustín Fernández III, who co-founded Rational Vaccines with Halford, Kruse noted.

The article is here.

Autonomous future could question legal ethics

Becky Raspe
Cleveland Jewish News
Originally published November 21, 2017

Here is an excerpt:

Northman said he finds the ethical implications of an autonomous future interesting, but completely contradictory to what he learned in law school in the 1990s.

“People were expected to be responsible for their activities,” he said. “And as long as it was within their means to stop something or, more tellingly, anticipate a problem before it occurs, they have an obligation to do so. When you blend software with this level of autonomy over the top of that, we are left with some difficult boundaries in trying to assess where a driver’s responsibility starts and where the software programmer’s continues.”

When considering the ethics surrounding autonomous living, Paris referenced the “trolley problem,” which goes like this: an automated vehicle is operating on an open road; ahead, there are five people in the road and one person off to the side. The question, Paris said, is whether the vehicle should travel on and hit the five people, or swerve and hit just the one.

“When humans are driving vehicles, they are the moral decision makers that make those choices behind the wheel,” she said. “Can engineers program automated vehicles to replace that moral thought with an algorithm? Will they prioritize the five lives or that one person? There are a lot of questions and not too many solutions at this point. With these ethical dilemmas, you have to be careful about what is being implemented.”

The article is here.

Thursday, December 7, 2017

Social media threat: People learned to survive disease, we can handle Twitter

Glenn Harlan Reynolds
USA Today
Originally posted November 20, 2017

Here is an excerpt:

Hunters and gatherers were at far less risk for infectious disease because they didn’t encounter very many new people very often. Their exposure was low, and contact among such bands was sporadic enough that diseases couldn’t spread very fast.

It wasn’t until you crowded thousands, or tens of thousands of them, along with their animals, into small dense areas with poor sanitation that disease outbreaks took off.  Instead of meeting dozens of new people per year, an urban dweller probably encountered hundreds per day. Diseases that would have affected only a few people at a time as they spread slowly across a continent (or just burned out for lack of new carriers) would now leap from person to person in a flash.

Likewise, in recent years we’ve gone from an era when ideas spread comparatively slowly, to one in which social media in particular allow them to spread like wildfire. Sometimes that’s good, when they’re good ideas. But most ideas are probably bad; certainly 90% of ideas aren’t in the top 10%. Maybe we don’t know the mental disease vectors that we’re inadvertently unleashing.

It took three things to help control the spread of disease in cities: sanitation, acclimation and better nutrition. In early cities, after all, people had no idea how diseases spread, something we didn’t fully understand until the late 19th century. But rule-of-thumb sanitation made things a lot better over time. Also, populations eventually adapted:  Diseases became endemic, not epidemic, and usually less severe as people developed immunity. And finally, as Scott notes, surviving disease was always a function of nutrition, with better-nourished populations doing much better than malnourished ones.

The article is here.

Attica: It’s Worse Than We Thought

Heather Ann Thompson
The New York Times
Originally posted November 19, 2017

Here is an excerpt:

As the fine print of that 1972 article read: “We are indebted to the inmates of the Attica Correctional Facility who participated in this study and to the warden and his administration for their help and cooperation.” This esteemed physician, a man working for two of New York’s most respected hospitals and receiving generous research funding from the N.I.H., was indeed conducting leprosy experiments at Attica.

But which of Attica’s nearly 2,400 prisoners, I wondered, was the subject of experiments relating to this crippling disease, without, as Dr. Brandriss admitted, adequate consent? Might it have been the 19-year-old who was at Attica because he had sliced the top of a neighbor’s convertible? Or a man imprisoned there for more serious offenses? Either way, no jury had sentenced them to being a guinea pig in any experiment relating to a disease as painful and disfiguring as leprosy.

And what about the hundreds of corrections officers and civilian employees working at Attica? Even if no one in this extremely crowded facility was actually exposed to this dreaded disease, one in which “prolonged close contact” with an infected patient is a most serious risk factor, were these state employees at all informed that medical experiments were being conducted on the men in their charge?

This is not the first time prisons have allowed secret medical experiments on those locked inside. A 1998 book on Holmesburg Prison in Pennsylvania revealed that a doctor there, Albert Kligman, had been experimenting on prisoners for years. After the book appeared, nearly 300 former prisoners sued him, the University of Pennsylvania and the manufacturers of the substances to which they had been exposed, but none of the defendants was held accountable.

The article is here.

Wednesday, December 6, 2017

What the heck is machine learning, and why is it everywhere these days?

Luke Dormehl
Digital Trends
Originally published November 18, 2017

Here is an excerpt:

Which programming languages do machine learners use?

Like the question above, there’s no one answer to this. Machine learning is a big field and, with so much ground to cover, there’s no one language that does absolutely everything.

Due to its simplicity, and the availability of deep learning libraries such as TensorFlow and PyTorch, Python is currently the number one language. If you’re thinking about delving into machine learning for the first time, it’s also one of the most accessible languages — and there are loads of online resources available.

Java is a good option, too, and comes with a great community of its own, while C++ and R are also worth checking out.
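For a sense of why Python is considered so accessible, here is a complete (if hypothetical) learning algorithm in about a dozen lines: one-variable linear regression trained by gradient descent, with no libraries at all. The data is invented to lie roughly on the line y = 2x.

```python
# Made-up data lying roughly on y = 2x
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]

w, lr = 0.0, 0.01
for _ in range(2000):
    # gradient of mean squared error with respect to the slope w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 1))  # learned slope, close to 2.0
```

Libraries like TensorFlow and PyTorch automate exactly this loop (gradients, updates) for models with millions of parameters, which is why they dominate the field.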

Is machine learning the perfect solution to all our AI problems?

You can probably guess where we’re going with this. No, machine learning isn’t infallible. Algorithms can still be subject to human biases, and the rule of “garbage in, garbage out” holds as true here as it does to any other data-driven field.

There are also questions about transparency, particularly when you’re dealing with the kind of “black boxes” that are an essential part of neural networks.

But as a tool that’s helping to revolutionize technology as we know it, and making AI available to the masses? You bet that it’s a great tool!

The article is here.

Disturbing allegations against psychologist at VT treatment center

Jennifer Costa
WCAX.com
Originally published November 17, 2017

Here is an excerpt:

Simonds is accused of making comments about female patients, calling them "whores" or saying they look "sexy" and asking inappropriate details about their sex lives. Staff members allege he showed young women favoritism, made promises about drug treatment and bypassed waiting lists to get them help ahead of others.

He's accused of yelling and physically intimidating patients. Some refused to file complaints fearing he would pull their treatment opportunities.

Staffers go on to paint a nasty picture of their work environment, telling the state Simonds routinely threatened, cursed and yelled at them, calling them derogatory names like "retarded," "monkeys," "fat and lazy," and threatening to fire them at will while sexually harassing female subordinates.

Co-workers claim Simonds banned them from referring residential patients to facilities closer to their homes, instructed them to alter referrals to keep them in the Maple Leaf system and fired a clinician who refused to follow these orders. He is also accused of telling staff members to lie to the state about staffing to maintain funding and of directing clinicians to keep patients longer than necessary to drum up revenue.

The article is here.

Tuesday, December 5, 2017

Turning Conservatives Into Liberals: Safety First

John Bargh
The Washington Post
Originally published November 22, 2017

Here is an excerpt:

But if they had instead just imagined being completely physically safe, the Republicans became significantly more liberal — their positions on social attitudes were much more like the Democratic respondents. And on the issue of social change in general, the Republicans’ attitudes were now indistinguishable from the Democrats. Imagining being completely safe from physical harm had done what no experiment had done before — it had turned conservatives into liberals.

In both instances, we had manipulated a deeper underlying reason for political attitudes, the strength of the basic motivation of safety and survival. The boiling water of our social and political attitudes, it seems, can be turned up or down by changing how physically safe we feel.

This is why it makes sense that liberal politicians intuitively portray danger as manageable — recall FDR’s famous Great Depression era reassurance of “nothing to fear but fear itself,” echoed decades later in Barack Obama’s final State of the Union address — and why President Trump and other Republican politicians are instead likely to emphasize the dangers of terrorism and immigration, relying on fear as a motivator to gain votes.

In fact, anti-immigration attitudes are also linked directly to the underlying basic drive for physical safety. For centuries, arch-conservative leaders have often referred to scapegoated minority groups as “germs” or “bacteria” that seek to invade and destroy their country from within. President Trump is an acknowledged germaphobe, and he has a penchant for describing people — not only immigrants but political opponents and former Miss Universe contestants — as “disgusting.”

The article is here.

Liberals and conservatives are similarly motivated to avoid exposure to one another's opinions

Jeremy A. Frimer, Linda J. Skitka, Matt Motyl
Journal of Experimental Social Psychology
Volume 72, September 2017, Pages 1-12

Abstract

Ideologically committed people are similarly motivated to avoid ideologically crosscutting information. Although some previous research has found that political conservatives may be more prone to selective exposure than liberals are, we find similar selective exposure motives on the political left and right across a variety of issues. The majority of people on both sides of the same-sex marriage debate willingly gave up a chance to win money to avoid hearing from the other side (Study 1). When thinking back to the 2012 U.S. Presidential election (Study 2), ahead to upcoming elections in the U.S. and Canada (Study 3), and about a range of other Culture War issues (Study 4), liberals and conservatives reported similar aversion toward learning about the views of their ideological opponents. Their lack of interest was not due to already being informed about the other side or attributable to election fatigue. Rather, people on both sides indicated that they anticipated that hearing from the other side would induce cognitive dissonance (e.g., require effort, cause frustration) and undermine a sense of shared reality with the person expressing disparate views (e.g., damage the relationship; Study 5). A high-powered meta-analysis of our data sets (N = 2417) did not detect a difference in the intensity of liberals' (d = 0.63) and conservatives' (d = 0.58) desires to remain in their respective ideological bubbles.

The research is here.

Monday, December 4, 2017

Ray Kurzweil on Turing Tests, Brain Extenders, and AI Ethics

Nancy Kaszerman
Wired.com
Originally posted November 13, 2017

Here is an excerpt:

There has been a lot of focus on AI ethics, how to keep the technology safe, and it's kind of a polarized discussion like a lot of discussions nowadays. I've actually talked about both promise and peril for quite a long time. Technology is always going to be a double-edged sword. Fire kept us warm, cooked our food, and burned down our houses. These technologies are much more powerful. It's also a long discussion, but I think we should go through three phases, at least I did, in contemplating this. First is delight at the opportunity to overcome age-old afflictions: poverty, disease, and so on. Then alarm that these technologies can be destructive and cause even existential risks. And finally I think where we need to come out is an appreciation that we have a moral imperative to continue progress in these technologies because, despite the progress we've made—and that's a-whole-nother issue, people think things are getting worse but they're actually getting better—there's still a lot of human suffering to be overcome. It's only continued progress particularly in AI that's going to enable us to continue overcoming poverty and disease and environmental degradation while we attend to the peril.

And there's a good framework for doing that. Forty years ago, there were visionaries who saw both the promise and the peril of biotechnology, basically reprogramming biology away from disease and aging. So they held a conference called the Asilomar Conference at the conference center in Asilomar, and came up with ethical guidelines and strategies—how to keep these technologies safe. Now it's 40 years later. We are getting clinical impact of biotechnology. It's a trickle today, it'll be a flood over the next decade. The number of people who have been harmed either accidentally or intentionally by abuse of biotechnology so far has been zero. It's a good model for how to proceed.

The article is here.

Psychologist felt 'honest, sincere' before $800K healthcare fraud exposed

John Agar
MLive.com
Originally posted November 21, 2017

A psychologist who defrauded insurance companies of $800,000 spent half of the money on vacations, concert tickets and a mobile-recording business, the government said.

George E. Compton Jr., 63, of Sturgis, was sentenced by U.S. District Judge Gordon Quist to 28 months in prison.

Compton, who pleaded guilty to healthcare fraud, said he was "ashamed" of his actions.

"Until this investigation, I did not hesitate to describe myself as an honest, sincere man," he wrote in a letter to the judge. "Seeing myself from a different perspective has been trying to say the least. ... The worst punishment for my admitted crimes will be the exclusion from the very work I love."

The government said he billed insurance companies for counseling sessions he did not provide, from Jan. 1, 2013, until June 30, 2016.

The article is here.

Sunday, December 3, 2017

Lack of Intellectual Humility Plagues Our Times, Say Researchers

Paul Ratner
BigThink.com
Originally posted November 12, 2017

Researchers from Duke University say that intellectual humility is an important personality trait that has fallen into short supply in our country.

Intellectual humility is like open-mindedness. It is basically an awareness that your beliefs may be wrong, influencing a person’s ability to make decisions in politics, health and other areas of life. An intellectually humble person can have strong opinions, say the authors, but will still recognize they are not perfect and are willing to be proven wrong.

This trait is not linked to a specific partisan view, with researchers finding no difference in levels of the characteristic between conservatives, liberals, religious or non-religious people. In fact, the scientists possibly managed to put to rest an age-old stereotype, explained the study’s lead author Mark Leary, a professor of psychology and neuroscience at Duke.

The article is here.

Saturday, December 2, 2017

Japanese doctor who exposed a drug too good to be true calls for morality and reforms

Tomoko Otake
Japan Times
Originally posted November 15, 2017

Here is an excerpt:

Kuwajima says the Diovan case is a sobering reminder that large-scale clinical trials published in top medical journals should not be blindly trusted, as they can be exploited by drugmakers rushing to sell their products before their patents run out.

“I worked at a research hospital and had opportunities to try new or premarket drugs on patients, so I knew from early on that Diovan and the same class of drugs called ARB wouldn’t work, especially for elderly patients,” Kuwajima recalled in a recent interview at Tokyo Metropolitan Geriatric Hospital, where he has retired from full-time practice but still sees patients two days a week. “I had a strong sense of crisis that hordes of elderly people — whose ranks were growing as the population grayed — would be prescribed a drug that didn’t work.”

Kuwajima said he immediately found the Diovan research suspicious because the results were just too good to be true. This was before Novartis admitted that it had paid five professors conducting studies at their universities a total of ¥1.1 billion in “research grants,” and even had Shirahashi, a Novartis employee purporting to be a university lecturer, help with statistical analyses for the papers.

The article is here.

Friday, December 1, 2017

The Essence of the Individual: The Pervasive Belief in the True Self Is an Instance of Psychological Essentialism

Andrew G. Christy, Rebecca J. Schlegel, and Andrei Cimpian
Preprint

Abstract

Eight studies (N = 2,974) were conducted to test the hypothesis that the widespread folk belief in the true self is an instance of psychological essentialism. Results supported this hypothesis. Specifically, participants’ reasoning about the true self displayed the telltale features of essentialist reasoning (immutability, discreteness, consistency, informativeness, inherence, and biological basis; Studies 1–4); participants’ endorsement of true-self beliefs correlated with individual differences in other essentialist beliefs (Study 5); and experimental manipulations of essentialist thought in domains other than the self were found to “spill over” and affect the extent to which participants endorsed true-self beliefs (Studies 6–8). These findings advance theory on the origins and functions of true-self beliefs, revealing these beliefs to be a specific instance of a broader tendency to explain phenomena in the world in terms of underlying essences.

The preprint is here.

Selling Bad Therapy to Trauma Victims

Jonathan Shedler
Psychology Today
Originally published November 19, 2017

Here is the conclusion:

First, do no harm

Many health insurance companies discriminate against psychotherapy. Congress has passed laws mandating mental health “parity” (equal coverage for medical and mental health conditions) but health insurers circumvent them. This has led to class action lawsuits against health insurance companies, but discrimination continues.

One way that health insurers circumvent parity laws is by shunting patients to the briefest and cheapest therapies — just the kind of therapies recommended by the APA’s treatment guidelines. Another way is by making therapy so impersonal and dehumanizing that patients drop out. Health insurers do not publicly say the treatment decisions are driven by economic self-interest. They say the treatments are scientifically proven — and point to treatment guidelines like those just issued by the APA.

It’s bad enough that most Americans don’t have adequate mental health coverage, without also being gaslighted and told that inadequate therapy is the best therapy.

The APA’s ethics code begins, “Psychologists strive to benefit those with whom they work and take care to do no harm.” APA has an honorable history of fighting for patients’ access to good care and against health insurance company abuses.

Blinded by RCT ideology, APA inadvertently handed a trump card to the worst apples in the health insurance industry.

The article is here.

Thursday, November 30, 2017

Artificial Intelligence & Mental Health

Smriti Joshi
Chatbot News Daily
Originally posted

Here is an excerpt:

There are many barriers to getting quality mental health care, from searching for a provider who practices in a user’s geographic location to screening multiple potential therapists to find someone you feel comfortable speaking with. The stigma associated with seeking mental health treatment often leaves people silently suffering from psychological issues. These barriers stop many people from finding help, and AI is being looked at as a potential tool to bridge this gap between service providers and service users.

Imagine how many people would benefit if artificial intelligence could bring quality, affordable mental health support to anyone with an internet connection. A psychiatrist or psychologist examines a person’s tone, word choice, the length of phrases and so on, and these are all crucial cues to understanding what’s going on in someone’s mind. Researchers are now applying machine learning to diagnose people with mental disorders. Harvard University and University of Vermont researchers are working on integrating machine learning tools with Instagram to improve depression screening. Using color analysis, metadata, and algorithmic face detection, they were able to reach 70 percent accuracy in detecting signs of depression. The research wing at IBM is using transcripts and audio from psychiatric interviews, coupled with machine learning techniques, to find patterns in speech that help clinicians accurately predict and monitor psychosis, schizophrenia, mania, and depression. Research led by John Pestian, a professor at Cincinnati Children’s Hospital Medical Center, showed that machine learning is up to 93 percent accurate in identifying a suicidal person.

The post is here.

Why We Should Be Concerned About Artificial Superintelligence

Matthew Graves
Skeptic Magazine
Originally published November 2017

Here is an excerpt:

Our intelligence is ultimately a mechanistic process that happens in the brain, but there is no reason to assume that human intelligence is the only possible form of intelligence. And while the brain is complex, this is partly an artifact of the blind, incremental progress that shaped it—natural selection. This suggests that developing machine intelligence may turn out to be a simpler task than reverse-engineering the entire brain. The brain sets an upper bound on the difficulty of building machine intelligence; work to date in the field of artificial intelligence sets a lower bound; and within that range, it’s highly uncertain exactly how difficult the problem is. We could be 15 years away from the conceptual breakthroughs required, or 50 years away, or more.

The fact that artificial intelligence may be very different from human intelligence also suggests that we should be very careful about anthropomorphizing AI. Depending on the design choices AI scientists make, future AI systems may not share our goals or motivations; they may have very different concepts and intuitions; or terms like “goal” and “intuition” may not even be particularly applicable to the way AI systems think and act. AI systems may also have blind spots regarding questions that strike us as obvious. AI systems might also end up far more intelligent than any human.

The last possibility deserves special attention, since superintelligent AI has far more practical significance than other kinds of AI.

AI researchers generally agree that superintelligent AI is possible, though they have different views on how and when it’s likely to be developed. In a 2013 survey, top-cited experts in artificial intelligence assigned a median 50% probability to AI being able to “carry out most human professions at least as well as a typical human” by the year 2050, and also assigned a 50% probability to AI greatly surpassing the performance of every human in most professions within 30 years of reaching that threshold.

The article is here.

Wednesday, November 29, 2017

The Hype of Virtual Medicine

Ezekiel J. Emanuel
The Wall Street Journal
Originally posted Nov. 10, 2017

Here is an excerpt:

But none of this will have much of an effect on the big and unsolved challenge for American medicine: how to change the behavior of patients. According to the Centers for Disease Control and Prevention, fully 86% of all health care spending in the U.S. is for patients with chronic illness—emphysema, arthritis and the like. How are we to make real inroads against these problems? Patients must do far more to monitor their diseases, take their medications consistently and engage with their primary-care physicians and nurses. In the longer term, we need to lower the number of Americans who suffer from these diseases by getting them to change their habits and eat healthier diets, exercise more and avoid smoking.

There is no reason to think that virtual medicine will succeed in inducing most patients to cooperate more with their own care, no matter how ingenious the latest gizmos. Many studies that have tried some high-tech intervention to improve patients’ health have failed.

Consider the problem of patients who do not take their medication properly, leading to higher rates of complications, hospitalization and even mortality. Researchers at Harvard, in collaboration with CVS, published a study in JAMA Internal Medicine in May comparing different low-cost devices for encouraging patients to take their medication as prescribed. The more than 50,000 participants were randomly assigned to one of three options: high-tech pill bottles with digital timer caps, pillboxes with daily compartments or standard plastic pillboxes. The high-tech pill bottles did nothing to increase compliance.

Other efforts have produced similar failures.

The article is here.

A Lost World

Michael Sacasas
thefrailestthing.com
Originally posted January 29, 2017

Here is the conclusion:

Rather, it is a situation in which moral evaluations themselves have shifted. It is not that some people now lied and called an act of thoughtless aggression a courageous act. It is that what had before been commonly judged to be an act of thoughtless aggression was now judged by some to be a courageous act. In other words, it would appear that in very short order, moral judgments and the moral vocabulary in which they were expressed shifted dramatically.

It brings to mind Hannah Arendt’s frequent observation about how quickly the self-evidence of long-standing moral principles were overturned in Nazi Germany: “… it was as though morality suddenly stood revealed in the original meaning of the word, as a set of mores, customs and manners, which could be exchanged for another set with hardly more trouble than it would take to change the table manners of an individual or a people.”

It is shortsighted, at this juncture, to ask how we can find agreement or even compromise. We do not, now, even know how to disagree well; nothing like an argument in the traditional sense is being had. It is an open question whether anyone can even be said to be speaking intelligibly to anyone who does not already fully agree with their positions and premises. The common world that is both the condition of speech and its gift to us is withering away. A rift has opened up in our political culture that will not be mended until we figure out how to reconstruct the conditions under which speech can once again become meaningful. Until then, I fear, the worst is still before us.

The post is here.

Tuesday, November 28, 2017

Trusting big health data

Angela Villanueva
Baylor College of Medicine Blogs
Originally posted November 10, 2017

Here is an excerpt:

Potentially exacerbating this mistrust is a sense of loss of privacy and absence of control over information describing us and our habits. Given the extent of current “everyday” data collection and sharing for marketing and other purposes, this lack of trust is not unreasonable.

Health information sharing makes many people uneasy, particularly because of the potential harms such as insurance discrimination or stigmatization. Data breaches like the recent Equifax hack may add to these concerns and affect people’s willingness to share their health data.

But it is critical to encourage members of all groups to participate in big data initiatives focused on health in order for all to benefit from the resulting discoveries. My colleagues and I recently published an article detailing eight guiding principles for successful data sharing; building trust is one of them.

Here is the article.

Don’t Nudge Me: The Limits of Behavioral Economics in Medicine

Aaron E. Carroll
The New York Times - The Upshot
Originally posted November 6, 2017

Here is an excerpt:

But those excited about the potential of behavioral economics should keep in mind the results of a recent study. It pulled out all the stops in trying to get patients who had a heart attack to be more compliant in taking their medication. (Patients’ adherence at such a time is surprisingly low, even though it makes a big difference in outcomes, so this is a major problem.)

Researchers randomly assigned more than 1,500 people to one of two groups. All had recently had heart attacks. One group received the usual care. The other received special electronic pill bottles that monitored patients’ use of medication. Those patients who took their drugs were entered into a lottery in which they had a 20 percent chance to receive $5 and a 1 percent chance to win $50 every day for a year.

That’s not all. The lottery group members could also sign up to have a friend or family member automatically be notified if they didn’t take their pills so that they could receive social support. They were given access to special social work resources. There was even a staff engagement adviser whose specific duty was providing close monitoring and feedback, and who would remind patients about the importance of adherence.

This was a kitchen-sink approach. It involved direct financial incentives, social support nudges, health care system resources and significant clinical management. It failed.

The article is here.

Monday, November 27, 2017

Social Media Channels in Health Care Research and Rising Ethical Issues

Samy A. Azer
AMA Journal of Ethics. November 2017, Volume 19, Number 11: 1061-1069.

Abstract

Social media channels such as Twitter, Facebook, and LinkedIn have been used as tools in health care research, opening new horizons for research on health-related topics (e.g., the use of mobile social networking in weight loss programs). While there have been efforts to develop ethical guidelines for internet-related research, researchers still face unresolved ethical challenges. This article investigates some of the risks inherent in social media research and discusses how researchers should handle challenges related to confidentiality, privacy, and consent when social media tools are used in health-related research.

Here is an excerpt:

Social Media Websites and Ethical Challenges

While one may argue that regardless of the design and purpose of social media websites (channels) all information conveyed through social media should be considered public and therefore usable in research, such a generalization is incorrect and does not reflect the principles we follow in other types of research. The distinction between public and private online spaces can blur, and in some situations it is difficult to draw a line. Moreover, as discussed later, social media channels operate under different rules than research, and thus using these tools in research may raise a number of ethical concerns, particularly in health-related research. Good research practice fortifies high-quality science; ethical standards, including integrity; and the professionalism of those conducting the research. Importantly, it ensures the confidentiality and privacy of information collected from individuals participating in the research. Yet, in social media research, there are challenges to ensuring confidentiality, privacy, and informed consent.

The article is here.

Suicide Is Not The Same As "Physician Aid In Dying"

American Association of Suicidology
Suicide Is Not The Same As "Physician Aid In Dying"
Approved October 30, 2017

Executive summary 

The American Association of Suicidology recognizes that the practice of physician aid in dying, also called physician assisted suicide, Death with Dignity, and medical aid in dying, is distinct from the behavior that has been traditionally and ordinarily described as “suicide,” the tragic event our organization works so hard to prevent. Although there may be overlap between the two categories, legal physician assisted deaths should not be considered to be cases of suicide and are therefore a matter outside the central focus of the AAS.

(cut)

Conclusion 

In general, suicide and physician aid in dying are conceptually, medically, and legally different phenomena, with an undetermined amount of overlap between these two categories. The American Association of Suicidology is dedicated to preventing suicide, but this has no bearing on the reflective, anticipated death a physician may legally help a dying patient facilitate, whether called physician-assisted suicide, Death with Dignity, physician assisted dying, or medical aid in dying. In fact, we believe that the term “physician-assisted suicide” in itself constitutes a critical reason why these distinct death categories are so often conflated, and should be deleted from use. Such deaths should not be considered to be cases of suicide and are therefore a matter outside the central focus of the AAS.

The full document is here.

Sunday, November 26, 2017

The Wisdom in Virtue: Pursuit of Virtue Predicts Wise Reasoning About Personal Conflicts

Alex C. Huynh, Harrison Oakes, Garrett R. Shay, & Ian McGregor
Psychological Science
Article first published online: October 3, 2017

Abstract

Most people can reason relatively wisely about others’ social conflicts, but often struggle to do so about their own (i.e., Solomon’s paradox). We suggest that true wisdom should involve the ability to reason wisely about both others’ and one’s own social conflicts, and we investigated the pursuit of virtue as a construct that predicts this broader capacity for wisdom. Results across two studies support prior findings regarding Solomon’s paradox: Participants (N = 623) more strongly endorsed wise-reasoning strategies (e.g., intellectual humility, adopting an outsider’s perspective) for resolving other people’s social conflicts than for resolving their own. The pursuit of virtue (e.g., pursuing personal ideals and contributing to other people) moderated this effect of conflict type. In both studies, greater endorsement of the pursuit of virtue was associated with greater endorsement of wise-reasoning strategies for one’s own personal conflicts; as a result, participants who highly endorsed the pursuit of virtue endorsed wise-reasoning strategies at similar levels for resolving their own social conflicts and resolving other people’s social conflicts. Implications of these results and underlying mechanisms are explored and discussed.

Here is an excerpt:

We propose that the litmus test for wise character is whether one can reason wisely about one’s own social conflicts. As did the biblical King Solomon, people tend to reason more wisely about others’ social conflicts than their own (i.e., Solomon’s paradox; Grossmann & Kross, 2014, see also Mickler & Staudinger, 2008, for a discussion of personal vs. general wisdom). Personal conflicts impede wise reasoning because people are more likely to immerse themselves in their own perspective and emotions, relegating other perspectives out of awareness, and increasing certainty regarding preferred perspectives (Kross & Grossmann, 2012; McGregor, Zanna, Holmes, & Spencer, 2001). In contrast, reasoning about other people’s conflicts facilitates wise reasoning through the adoption of different viewpoints and the avoidance of sociocognitive biases (e.g., poor recognition of one’s own shortcomings—e.g., Pronin, Olivola, & Kennedy, 2008). In the present research, we investigated whether virtuous motives facilitate wisdom about one’s own conflicts, enabling one to pass the litmus test for wise character.

The article is here.

Saturday, November 25, 2017

Rather than being free of values, good science is transparent about them

Kevin Elliott
The Conversation
Originally published November 8, 2017

Scientists these days face a conundrum. As Americans are buffeted by accounts of fake news, alternative facts and deceptive social media campaigns, how can researchers and their scientific expertise contribute meaningfully to the conversation?

There is a common perception that science is a matter of hard facts and that it can and should remain insulated from the social and political interests that permeate the rest of society. Nevertheless, many historians, philosophers and sociologists who study the practice of science have come to the conclusion that trying to kick values out of science risks throwing the baby out with the bathwater.

Ethical and social values – like the desire to promote economic development, public health or environmental protection – often play integral roles in scientific research. By acknowledging this, scientists might seem to give away their authority as a defense against the flood of misleading, inaccurate information that surrounds us. But I argue in my book “A Tapestry of Values: An Introduction to Values in Science” that if scientists take appropriate steps to manage and communicate about their values, they can promote a more realistic view of science as both value-laden and reliable.

The article is here.

Friday, November 24, 2017

Trump presidency spurs cottage industry of ethics watchdogs

Fredreka Schouten
USA Today
Originally posted November 23, 2017

Here is an excerpt:

The groups pursuing Trump say they are trying to keep close tabs on a president who is bucking ethical norms by retaining ownership of his businesses and abruptly firing FBI Director James Comey, who was leading the agency’s probe into the Russian government involvement in last year’s election.

“We are in a crisis of ethics,” said Noah Bookbinder, the executive director of Citizens for Responsibility and Ethics in Washington, or CREW. “There are ethics and conflicts and influence problems in this administration unlike any we have ever seen. And it began with the president’s decision not to divest from his businesses.”

White House officials this week contended that Trump is operating ethically. As an example, they point to his signing of a far-reaching ethics policy that, among other things, tries to slow the revolving door between government and industry by imposing a five-year cooling-off period before former government appointees can work as lobbyists.

“An organized onslaught from partisan groups committed to undermining the President’s agenda can’t change the fact that he has elevated ethics within this administration,” White House spokesman Raj Shah said in a statement.

The information is here.

Navigating Political Talk at Work

David W. Ballard
Harvard Business Review
Originally posted March 2, 2017

Here is an excerpt:

Managers should recognize that the current political environment could be having an effect on people, especially if they’re talking about it in the office. Be aware of employees’ stress levels, share information about benefits and resources that are available to help support them, and encourage appropriate use of your company’s employee assistance program, mental health benefits, flexible work arrangements, and workplace wellness activities that can help people stay healthy and functioning at their best.

Senior leaders and supervisors can communicate a powerful message by modeling the behavior and actions they’re trying to promote in the organization. By demonstrating civility and respect, actively using available support resources, participating in organizational activities, and managing their own stress levels in healthy ways, business leaders can back their words with actions that show they are serious about creating a healthy work environment.

Focusing on common goals and shared values is another way to bring people together despite their differences. As a manager, set clear goals for your team and focus people on working together toward common objectives. When political turmoil is creating tension and distraction, focusing on the work and accomplishing something together may be a welcome reprieve.

Finally, step in if things get too heated. If the current political climate is negatively affecting an employee’s job performance, address the issue before it creates a bigger problem. Provide the necessary feedback, work with the employee to create a plan, and point them to available resources that might help. When tensions turn into conflicts between coworkers, counsel employees on any relevant policies related to harassment or incivility, help them find ways to work together, and involve human resources as needed.

The article is here.

Thursday, November 23, 2017

Tiny human brain organoids implanted into rodents, triggering ethical concerns

Sharon Begley
STAT News
Originally posted November 6, 2017

Here is an excerpt:

He and his colleagues discussed the ethics of implanting human brain organoids into rats, including whether the animals might become too human. “Some of what people warn about is still science fiction,” he said. “Right now, the organoids are so crude we probably decrease” the rats’ brain function.

Ethicists argue that “not a problem now” doesn’t mean “never a problem.” One concern raised by the human brain organoid implants “is that functional integration [of the organoids] into the central nervous system of animals can in principle alter an animal’s behavior or needs,” said bioethicist Jonathan Kimmelman of McGill University in Montreal. “The task, then, is to carefully monitor if such alterations occur.” If the human implant gives an animal “increased sentience or mental capacities,” he added, it might suffer more.

Would it feel like a human trapped in a rodent’s body? Because both the Salk and Penn experiments used adult rodents, their brains were no longer developing, unlike the case if implants had been done with fetal rodent brains. “It’s hard to imagine how human-like cognitive capacities, like consciousness, could emerge under such circumstances,” Kimmelman said, referring to implants into an adult rodent brain. Chen agreed: He said his experiment “carries much less risk of creating animals with greater ‘brain power’ than normal” because the human organoid goes into “a specific region of already developed brain.”

The belief that consciousness is off the table is in fact the subject of debate. An organoid would need to be much more advanced than today’s to experience consciousness, said the Allen Institute’s Koch, including having dense neural connections, distinct layers, and other neuro-architecture. But if those and other advances occur, he said, “then the question is very germane: Does this piece of cortex feel something?” Asked whether brain organoids can achieve consciousness without sensory organs and other means of perceiving the world, Koch said it would experience something different than what people and other animals do: “It raises the question, what is it conscious of?”

The article is here.

Wednesday, November 22, 2017

The Public’s Distrust of Biotech Is Deepening. Commercialization May Be to Blame.

Jim Kozubek
undark.org
Originally published November 3, 2017

Here is an excerpt:

The high profile patent battle over the CRISPR-Cas9 gene editing tool, often valued commercially at a billion dollars, and the FDA approval of the first genetically modified medicine for $475,000 — a sale price that is 19 times the cost to manufacture it — have displayed the capacity for turning taxpayer-funded research into an aggressive money-making enterprise. More personally, genetics are being used to typify people for cancer risk and age-related diseases, schizophrenia, autism, and intelligence, none of which truly belong to diagnostic categories.

It is therefore no surprise that parents may want to protect their newborns from becoming targets of commercialization.

In truth, genome sequencing is an extension of earlier commercial sequencing tests and standard newborn screening tests. BabySeq has expanded these to 166 genes, which can theoretically predict thousands of disorders and identify several genetic risk variants. For instance, it has identified a dozen newborns with a genetic variant associated with biotinidase deficiency, which can impair cognition and can be addressed by taking a simple vitamin. Casie Genetti, a researcher at Boston Children’s Hospital, noted that researchers found 109 of 125 babies had at least one, and up to six, genetic variants for an autosomal recessive disorder, meaning that if they went on to have children with a partner whose corresponding gene was compromised in a similar way, it could be damaging or life-threatening for their own baby.

Part of the problem is that we all have some measure of genetic variation, and that can be either dangerous or advantageous depending on the cell type or genetic background or environment.

The article is here.

Many Academics Are Eager to Publish in Worthless Journals

Gina Kolata
The New York Times
Originally published October 30, 2017

Here is an excerpt:

Yet “every university requires some level of publication,” said Lawrence DiPaolo, vice president of academic affairs at Neumann University in Aston, Pa.

Recently a group of researchers invented a fake academic: Anna O. Szust. The name in Polish means fraudster. Dr. Szust applied to legitimate and predatory journals asking to be an editor. She supplied a résumé in which her publications and degrees were total fabrications, as were the names of the publishers of the books she said she had contributed to.

The legitimate journals rejected her application immediately. But 48 out of 360 questionable journals made her an editor. Four made her editor in chief. One journal sent her an email saying, “It’s our pleasure to add your name as our editor in chief for the journal with no responsibilities.”

The lead author of the Dr. Szust sting operation, Katarzyna Pisanski, a psychologist at the University of Sussex in England, said the question of what motivates people to publish in such journals “is a touchy subject.”

“If you were tricked by spam email you might not want to admit it, and if you did it wittingly to increase your publication counts you might also not want to admit it,” she said in an email.

The consequences of participating can be more than just a résumé freckled with poor-quality papers and meeting abstracts.

Publications become part of the body of scientific literature.

There are indications that some academic institutions are beginning to wise up to the dangers.

Dewayne Fox, an associate professor of fisheries at Delaware State University, sits on a committee at his school that reviews job applicants. One recent applicant, he recalled, listed 50 publications in such journals and is on the editorial boards of some of them.

A few years ago, he said, no one would have noticed. But now he and others on search committees at his university have begun scrutinizing the publications closely to see if the journals are legitimate.

The article is here.

Tuesday, November 21, 2017

What The Good Place Can Teach You About Morality

Patrick Allan
Lifehacker.com
Originally posted November 6, 2017

Here is an excerpt:

Doing “Good” Things Doesn’t Necessarily Make You a Good Person

In The Good Place, the version of the afterlife you get sent to is based on a complicated point system. Doing “good” deeds earns you a certain number of positive points, and doing “bad” things will subtract them. Your point total when you die is what decides where you’ll go. Seems fair, right?

Despite the fact that The Good Place makes life feel like a point-based video game, we quickly learn morality isn’t as black and white as positive and negative points. At one point, Eleanor tries to rack up points by holding doors for people, an action worth 3 points a pop. To put that in perspective, her score is -4,008 and she needs to meet the average of 1,222,821. It would take her a long time to get there, but it’s one way to do it. At least, it would be if it worked. She quickly learns after a while that she didn’t earn any points, because she’s not actually trying to be nice to people. Her only goal is to rack up points so she can stay in The Good Place, which is an inherently selfish reason. The situation brings up a valid question: are “good” things done for selfish reasons still “good” things?
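For the curious, Eleanor’s door-holding math works out neatly. This is just a toy calculation using the numbers quoted above (the show never spells out its scoring rules this way):

```python
def doors_needed(current: int, target: int, points_per_door: int = 3) -> int:
    """How many door-holds it takes to climb from `current` to `target`."""
    deficit = target - current
    return -(-deficit // points_per_door)  # ceiling division

# Eleanor: score of -4,008, aiming for the 1,222,821 average, at 3 points a door.
print(doors_needed(-4008, 1_222_821))  # → 408943
```

Over 400,000 doors, assuming the points even counted. Which, as the episode shows, they didn’t.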

I don’t want to spoil too much, but as the series goes on, we see this question asked time and time again with each of its characters. Chidi may have spent his life studying moral ethics, but does knowing everything about pursuing “good” mean you are good? Tahani spent her entire life as a charitable philanthropist, but she did it all for the questionable pursuit of finally outshining her near-perfect sister. She did a lot of good, but is she “good”? It’s something to consider yourself as you go about your day. Try to do “good” things, but ask yourself every once in a while who those “good” things are really for.

The article is here.

Note: I really enjoy watching The Good Place.  Very clever. 

My spoiler: I think Michael is supposed to be in The Good Place too, not really the architect.

Harnessing the Placebo Effect: Exploring the Influence of Physician Characteristics on Placebo Response

Lauren C. Howe, J. Parker Goyer, and Alia J. Crum
Health Psychology, 36(11), 1074-1082.

Abstract

Objective: Research on placebo/nocebo effects suggests that expectations can influence treatment outcomes, but placebo/nocebo effects are not always evident. This research demonstrates that a provider’s social behavior moderates the effect of expectations on physiological outcomes.

Methods: After inducing an allergic reaction in participants through a histamine skin prick test, a health care provider administered a cream with no active ingredients and set either positive expectations (cream will reduce reaction) or negative expectations (cream will increase reaction). The provider demonstrated either high or low warmth, or either high or low competence.

Results: The impact of expectations on allergic response was enhanced when the provider acted both warmer and more competent and negated when the provider acted colder and less competent.

Conclusion: This study suggests that placebo effects should be construed not as a nuisance variable with mysterious impact but instead as a psychological phenomenon that can be understood and harnessed to improve treatment outcomes.

Link to the pdf is here.

Monday, November 20, 2017

Best-Ever Algorithm Found for Huge Streams of Data

Kevin Hartnett
Wired Magazine
Originally published October 29, 2017

Here is an excerpt:

Computer programs that perform these kinds of on-the-go calculations are called streaming algorithms. Because data comes at them continuously, and in such volume, they try to record the essence of what they’ve seen while strategically forgetting the rest. For more than 30 years computer scientists have worked to build a better streaming algorithm. Last fall a team of researchers invented one that is just about perfect.

“We developed a new algorithm that is simultaneously the best” on every performance dimension, said Jelani Nelson, a computer scientist at Harvard University and a co-author of the work with Kasper Green Larsen of Aarhus University in Denmark, Huy Nguyen of Northeastern University and Mikkel Thorup of the University of Copenhagen.

This best-in-class streaming algorithm works by remembering just enough of what it’s seen to tell you what it’s seen most frequently. It suggests that compromises that seemed intrinsic to the analysis of streaming data are not actually necessary. It also points the way forward to a new era of strategic forgetting.
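The core idea of "strategic forgetting" — keeping approximate counts of the most frequent items while discarding the rest — long predates this result. As a minimal sketch of the concept (this is the classic Misra-Gries algorithm, not the new algorithm the article describes), consider:

```python
def misra_gries(stream, k):
    """One-pass frequent-items sketch using at most k-1 counters,
    i.e. O(k) memory regardless of stream length."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Strategic forgetting: decrement every counter,
            # dropping any item whose counter reaches zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a"]
print(misra_gries(stream, k=3))
```

With k - 1 counters, any item appearing more than n/k times in a stream of length n is guaranteed to survive in the sketch; the algorithm described in the article improves on the accuracy, speed, and memory trade-offs of sketches like this one.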

Why we pretend to know things, explained by a cognitive scientist

Sean Illing
Vox.com
Originally posted November 3, 2017

Why do people pretend to know things? Why does confidence so often scale with ignorance? Steven Sloman, a professor of cognitive science at Brown University, has some compelling answers to these questions.

“We're biased to preserve our sense of rightness,” he told me, “and we have to be.”

The author of The Knowledge Illusion: Why We Never Think Alone, Sloman’s research focuses on judgment, decision-making, and reasoning. He’s especially interested in what’s called “the illusion of explanatory depth.” This is how cognitive scientists refer to our tendency to overestimate our understanding of how the world works.

We do this, Sloman says, because of our reliance on other minds.

“The decisions we make, the attitudes we form, the judgments we make, depend very much on what other people are thinking,” he said.

If the people around us are wrong about something, there’s a good chance we will be too. Proximity to truth compounds in the same way.

In this interview, Sloman and I talk about the problem of unjustified belief. I ask him about the political implications of his research, and if he thinks the rise of “fake news” and “alternative facts” has amplified our cognitive biases.

The interview/article is here.

Sunday, November 19, 2017

Rigorous Study Finds Antidepressants Worsen Long-Term Outcomes

Peter Simons
madinamerica.com
Originally posted

Here is an excerpt:

These results add to a body of research that indicates that antidepressants worsen long-term outcomes. In an article published in 1994, the psychiatrist Giovanni Fava wrote that “Psychotropic drugs actually worsen, at least in some cases, the progression of the illness which they are supposed to treat.” In a 2003 article, he wrote: “A statistical trend suggested that the longer the drug treatment, the higher the likelihood of relapse.”

Previous research has also found that antidepressants are no more effective than placebo for mild-to-moderate depression, and other studies have questioned whether such medications are effective even for severe depression. Concerns have also been raised about the health risks of taking antidepressants—such as a recent study which found that taking antidepressants increases one’s risk of death by 33% (see MIA report).

In fact, studies have demonstrated that as many as 85% of people recover spontaneously from depression. In a recent example, researchers found that only 35% of people who experienced depression had a second episode within 15 years. That means that 65% of people who have a bout of depression are likely never to experience it again.

Critics of previous findings have argued that it is not fair to compare those receiving antidepressants with those who do not. They argue that initial depression severity confounds the results—those with more severe symptoms may be more likely to be treated with antidepressants. Thus, according to some researchers, even if antidepressants worked as well as psychotherapy or receiving no treatment, those treated with antidepressants would still show worse outcomes—because they had more severe symptoms in the first place.

The article is here.

The target article is here.

Saturday, November 18, 2017

For some evangelicals, a choice between Moore and morality

Marc Fisher
The Washington Post
Originally posted November 16, 2017

Here is an excerpt:

What’s happening in the churches of Alabama — a state where half the residents consider themselves evangelical Christians, double the national average, according to a Pew Research study — is nothing less than a battle for the meaning of evangelism, some church leaders say. It is a titanic struggle between those who believe there must be one clear, unalterable moral standard and those who argue that to win the war for the nation’s soul, Christians must accept morally flawed leaders.

Evangelicals are not alone in shifting their view of the role moral character should play in choosing political leaders. Between 2011 and last year, the percentage of Americans who say politicians who commit immoral acts in their private lives can still behave ethically in public office jumped to 61 percent from 44 percent, according to a Public Religion Research Institute/Brookings poll. During the same period, the shift among evangelicals was even more dramatic, moving to 72 percent from 30 percent, the survey found.

“What you’re seeing here is rank hypocrisy,” said John Fea, an evangelical Christian who teaches history at Messiah College in Mechanicsburg, Pa. “These are evangelicals who have decided that the way to win the culture is now uncoupled from character. Their goal is the same as it was 30 years ago, to restore America to its Christian roots, but the political playbook has changed.”

The article is here.

And yes, I live in Mechanicsburg, PA, but I don't know John Fea.

Differential inter-subject correlation of brain activity when kinship is a variable in moral dilemma

Mareike Bacha-Trams, Enrico Glerean, Robin Dunbar, Juha M. Lahnakoski, and others
Scientific Reports 7, Article number: 14244

Abstract

Previous behavioural studies have shown that humans act more altruistically towards kin. Whether and how knowledge of genetic relatedness translates into differential neurocognitive evaluation of observed social interactions has remained an open question. Here, we investigated how the human brain is engaged when viewing a moral dilemma between genetic vs. non-genetic sisters. During functional magnetic resonance imaging, a movie was shown, depicting refusal of organ donation between two sisters, with subjects guided to believe the sisters were related either genetically or by adoption. Although 90% of the subjects self-reported that genetic relationship was not relevant, their brain activity told a different story. Comparing correlations of brain activity across all subject pairs between the two viewing conditions, we found significantly stronger inter-subject correlations in insula, cingulate, medial and lateral prefrontal, superior temporal, and superior parietal cortices, when the subjects believed that the sisters were genetically related. Cognitive functions previously associated with these areas include moral and emotional conflict regulation, decision making, and mentalizing, suggesting more similar engagement of such functions when observing refusal of altruism from a genetic sister. Our results show that mere knowledge of a genetic relationship between interacting persons robustly modulates social cognition of the perceiver.
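For readers unfamiliar with the method, inter-subject correlation (ISC) measures how similarly different viewers' brains respond over time while watching the same stimulus. A simplified illustration with synthetic data for a single brain region — the mean Pearson correlation across all subject pairs, not the authors' actual fMRI pipeline:

```python
import numpy as np
from itertools import combinations

def inter_subject_correlation(data):
    """data: array of shape (n_subjects, n_timepoints) for one region.
    Returns the mean Pearson correlation over all subject pairs."""
    n_subjects = data.shape[0]
    corrs = []
    for i, j in combinations(range(n_subjects), 2):
        r = np.corrcoef(data[i], data[j])[0, 1]
        corrs.append(r)
    return float(np.mean(corrs))

rng = np.random.default_rng(0)
shared = rng.standard_normal(200)  # stimulus-driven signal common to all viewers
# Each subject = shared signal plus individual noise.
subjects = shared + 0.5 * rng.standard_normal((10, 200))
print(inter_subject_correlation(subjects))
```

The study's comparison works on this principle: the more similarly subjects' brains track the film, the higher the pairwise correlations, and the "genetic sisters" condition produced stronger correlations in the regions listed above.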

The article is here.

Friday, November 17, 2017

Going with your gut may mean harsher moral judgments

Jeff Sossamon
www.futurity.org
Originally posted November 2, 2017

Going with your intuition could make you judge others’ moral transgressions more harshly and keep you from changing your mind, even after considering all the facts, a new study suggests.

The findings show that people who strongly rely on intuition automatically condemn actions they perceive to be morally wrong, even if there is no actual harm.

In psychology, intuition, or “gut instinct,” is defined as the ability to understand something immediately, without the need for reasoning.

“It is now widely acknowledged that intuitive processing influences moral judgment,” says Sarah Ward, a doctoral candidate in social and personality psychology at the University of Missouri.

“We thought people who were more likely to trust their intuition would be more likely to condemn things that are shocking, whereas people who don’t rely on gut feelings would not condemn these same actions as strongly,” Ward says.

Ward and Laura King, professor of psychological sciences, had study participants read through a series of scenarios and judge whether the action was wrong, such as an individual giving a gift to a partner that had previously been purchased for an ex.

The article is here.

The Illusion of Moral Superiority

Ben M. Tappin and Ryan T. McKay
Social Psychological and Personality Science
Volume: 8 issue: 6, page(s): 623-631
Issue published: August 1, 2017 

Abstract

Most people strongly believe they are just, virtuous, and moral; yet regard the average person as distinctly less so. This invites accusations of irrationality in moral judgment and perception—but direct evidence of irrationality is absent. Here, we quantify this irrationality and compare it against the irrationality in other domains of positive self-evaluation. Participants (N = 270) judged themselves and the average person on traits reflecting the core dimensions of social perception: morality, agency, and sociability. Adapting new methods, we reveal that virtually all individuals irrationally inflated their moral qualities, and the absolute and relative magnitude of this irrationality was greater than that in the other domains of positive self-evaluation. Inconsistent with prevailing theories of overly positive self-belief, irrational moral superiority was not associated with self-esteem. Taken together, these findings suggest that moral superiority is a uniquely strong and prevalent form of “positive illusion,” but the underlying function remains unknown.

The article is here.

Thursday, November 16, 2017

Is There a Right Way to Nudge? The Practice and Ethics of Choice Architecture

Evan Selinger and Kyle Whyte
Sociology Compass, Vol. 5, No. 10, pp. 923-935

Abstract

What exactly is a nudge, and how do nudges differ from alternative ways of modifying people's behavior, such as fines or penalties (e.g. taxing smokers) and increasing access to information (e.g. calorie counts on restaurant menus)? We open Section 2 by defining the concept of a nudge and move on to present some examples of nudges. Though there is certainly a clear concept of what a nudge is, there is some confusion when people design and talk about nudges in practice. In Sections 3 and 4, then, we discuss policies and technologies that get called nudges mistakenly as well as borderline cases where it is unclear whether people are being nudged. Understanding mistaken nudges and borderline cases allows citizens to consider critically whether they should support “alleged” nudge policies proposed by governments, corporations, and non-profit organizations. There are also important concerns about the ethics of nudging people's behavior. In Section 5 we review some major ethical and political issues surrounding nudges, covering both public anxieties and more formal scholarly criticisms. If nudges are to be justified as an acceptable form of behavior modification in democratic societies, nudge advocates must have reasons that allay anxieties and ethical concerns. However, in Section 6, we argue that nudge advocates must confront a particularly challenging problem. A strong justification of nudging, especially for pluralistic democracies, must show that nudge designers really understand how different people re-interpret the meaning of situations after a nudge has been introduced into the situations. We call this the problem of “semantic variance.” This problem, along with the ethical issues we discussed, makes us question whether nudges are truly viable mechanisms for improving people's lives and societies. Perhaps excitement over the potential of nudges is exaggerated.

The article is here.

Moral Hard-Wiring and Moral Enhancement

Introduction

In a series of papers (Persson & Savulescu 2008; 2010; 2011a; 2012a; 2013; 2014a) and book (Persson & Savulescu 2012b), we have argued that there is an urgent need to pursue research into the possibility of moral enhancement by biomedical means – e.g. by pharmaceuticals, non-invasive brain stimulation, genetic modification or other means directly modifying biology. The present time brings existential threats which human moral psychology, with its cognitive and moral limitations and biases, is unfit to address. Exponentially increasing, widely accessible technological advance and rapid globalisation create threats of intentional misuse (e.g. biological or nuclear terrorism) and global collective action problems, such as the economic inequality between developed and developing countries and anthropogenic climate change, which human psychology is not set up to address. We have hypothesized that these limitations are the result of the evolutionary function of morality being to maximize the fitness of small cooperative groups competing for resources. Because these limitations of human moral psychology pose significant obstacles to coping with the current moral mega-problems, we argued that biomedical modification of human moral psychology may be necessary. We have not argued that biomedical moral enhancement would be a single “magic bullet” but rather that it could play a role in a comprehensive approach which also features cultural and social measures.

The paper is here.

Wednesday, November 15, 2017

The U.S. Is Retreating from Religion

Allen Downey
Scientific American
Originally published on October 20, 2017

Since 1990, the fraction of Americans with no religious affiliation has nearly tripled, from about 8 percent to 22 percent. Over the next 20 years, this trend will accelerate: by 2020, there will be more of these "Nones" than Catholics, and by 2035, they will outnumber Protestants.

The following figure shows changes since 1972 and these predictions, based on data from the General Social Survey (GSS):
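As a rough illustration of the kind of projection involved — using approximate percentages from the article and a simple straight-line fit, not Downey's actual statistical model:

```python
import numpy as np

# Illustrative data only: approximate share of U.S. adults reporting
# no religious affiliation, loosely matching figures cited in the article.
years = np.array([1990, 2000, 2010, 2016])
none_share = np.array([0.08, 0.14, 0.18, 0.22])

# Fit a straight line and extrapolate to the years the article mentions.
slope, intercept = np.polyfit(years, none_share, 1)
for year in (2020, 2035):
    print(year, round(slope * year + intercept, 3))
```

Even this naive linear extrapolation points the same direction as the article's predictions; Downey's analysis uses the full GSS survey data and a more careful model of generational replacement.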

Catholic Hospital Group Grants Euthanasia to Mentally Ill, Defying Vatican

Francis X. Rocca
The Wall Street Journal
Originally posted October 27, 2017

A chain of Catholic psychiatric hospitals in Belgium is granting euthanasia to non-terminal patients, defying the Vatican and deepening a challenge to the church’s commitment to a constant moral code.

The board of the Brothers of Charity, Belgium’s largest single provider of psychiatric care, said the decision no longer belongs to Rome.

Truly Christian values, the board argued in September, should privilege a “person’s choice of conscience” over a “strict ethic of rules.”

The policy change is highly symbolic, said Didier Pollefeyt, a theologian and vice rector of the Catholic University of Leuven.

“The Brothers of Charity have been seen as a beacon of hope and resistance” to euthanasia, he said. “Now that the most Catholic institution gives up resistance, it looks like the most normal thing in the world.”

Belgium legalized euthanasia in 2002, the first country with a majority Catholic population to do so. Belgian bishops opposed the legislation, in line with the church’s catechism, which states that causing the death of the handicapped, sick or dying to eliminate their suffering is murder.

The article is here.

Tuesday, November 14, 2017

What is consciousness, and could machines have it?

Stanislas Dehaene, Hakwan Lau, & Sid Kouider
Science  27 Oct 2017: Vol. 358, Issue 6362, pp. 486-492

Abstract

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

The article is here.

Facial recognition may reveal things we’d rather not tell the world. Are we ready?

Amitha Kalaichandran
The Boston Globe
Originally published October 27, 2017

Here is an excerpt:

Could someone use a smartphone snapshot, for example, to diagnose another person’s child at the playground? The Face2Gene app is currently limited to clinicians; while anyone can download it from the App Store on an iPhone, it can only be used after the user’s healthcare credentials are verified. “If the technology is widespread,” says Lin, “do I see people taking photos of others for diagnosis? That would be unusual, but people take photos of others all the time, so maybe it’s possible. I would obviously worry about the invasion of privacy and misuse if that happened.”

Humans are pre-wired to discriminate against others based on physical characteristics, and programmers could easily manipulate AI programming to mimic human bias. That’s what concerns Anjan Chatterjee, a neuroscientist who specializes in neuroesthetics, the study of what our brains find pleasing. He has found that, relying on baked-in prejudices, we often quickly infer character just from seeing a person’s face. In a paper slated for publication in Psychology of Aesthetics, Creativity, and the Arts, Chatterjee reports that a person’s appearance — and our interpretation of that appearance — can have broad ramifications in professional and personal settings. This conclusion has serious implications for artificial intelligence.

“We need to distinguish between classification and evaluation,” he says. “Classification would be, for instance, using it for identification purposes like fingerprint recognition. . . which was once a privacy concern but seems to have largely faded away. Using the technology for evaluation would include discerning someone’s sexual orientation or for medical diagnostics.” The latter raises serious ethical questions, he says. One day, for example, health insurance companies could use this information to adjust premiums based on a predisposition to a condition.

The article is here.

Monday, November 13, 2017

Will life be worth living in a world without work? Technological Unemployment and the Meaning of Life

John Danaher
forthcoming in Science and Engineering Ethics

Abstract

Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the (presumed) efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus on the second. In doing so, I make three arguments. First, I argue that there are good reasons to embrace non-work and that these reasons become more compelling in an era of technological unemployment. Second, I argue that the technological advances that make widespread technological unemployment possible could still threaten or undermine human flourishing and meaning, especially if (as is to be expected) they do not remain confined to the economic sphere. And third, I argue that this threat could be contained if we adopt an integrative approach to our relationship with technology. In advancing these arguments, I draw on three distinct literatures: (i) the literature on technological unemployment and workplace automation; (ii) the antiwork critique — which I argue gives reasons to embrace technological unemployment; and (iii) the philosophical debate about the conditions for meaning in life — which I argue gives reasons for concern.

The article is here.