Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, August 21, 2017

Burnout at Work Isn’t Just About Exhaustion. It’s Also About Loneliness

Emma Seppala and Marissa King
Harvard Business Review
First published June 29, 2017

More and more people are feeling tired and lonely at work. In analyzing the General Social Survey of 2016, we found that, compared with roughly 20 years ago, people are twice as likely to report that they are always exhausted. Close to 50% of people say they are often or always exhausted due to work. This is a shockingly high statistic — and it’s a 32% increase from two decades ago. What’s more, there is a significant correlation between feeling lonely and work exhaustion: The more people are exhausted, the lonelier they feel.

This loneliness is not a result of social isolation, as you might think, but rather is due to the emotional exhaustion of workplace burnout. In researching the book The Happiness Track, we found that 50% of people — across professions, from the nonprofit sector to the medical field — are burned out. This isn’t just a problem for busy, overworked executives (though the high rates of loneliness and burnout among this group are well known). Our work suggests that the problem is pervasive across professions and up and down corporate hierarchies.

Loneliness, whether it results from social isolation or exhaustion, has serious consequences for individuals. John Cacioppo, a leading expert on loneliness and coauthor of Loneliness: Human Nature and the Need for Social Connection, emphasizes its tremendous impact on psychological and physical health and longevity. Research by Sarah Pressman, of the University of California, Irvine, corroborates his work and demonstrates that while obesity reduces longevity by 20%, drinking by 30%, and smoking by 50%, loneliness reduces it by a whopping 70%. In fact, one study suggests that loneliness increases your chance of stroke or coronary heart disease — the leading cause of death in developed countries — by 30%. On the other hand, feelings of social connection can strengthen our immune system, lengthen our life, and lower rates of anxiety and depression.

Publisher won’t retract two papers, despite university’s request

Alison McCook
Retraction Watch
Originally published August 4, 2017

Jens Förster, a high-profile social psychologist, has agreed to retract multiple papers following an institutional investigation — but has also fought to keep some papers intact. Recently, one publisher agreed with his appeal, and announced it would not retract two of his papers, despite the recommendation of his former employer.

Last month, the American Psychological Association (APA) announced it would not retract two papers co-authored by Förster, which the University of Amsterdam had recommended for retraction in May 2015. The APA had followed the university’s advice last year and retracted two other papers, which Förster had agreed to as part of a settlement with the German Society for Psychology (DGPs). But after multiple appeals by Förster and his co-authors, the publisher has decided to retain the papers as part of the scientific record.

The information is here.

Sunday, August 20, 2017

The ethics of creating GMO humans

The Editorial Board
The Los Angeles Times
Originally posted August 3, 2017

Here is an excerpt:

But there is also a great deal we still don’t know about how minor issues might become major ones as people pass on edited DNA to their offspring, and as people who have had some genes altered reproduce with people who have had other genes altered. We’ve seen how selectively breeding to produce one trait can unexpectedly produce other, less desirable outcomes. Remember how growers were able to create tomatoes that were more uniformly red, but in the process, they turned off the gene that gave tomatoes flavor?

Another major issue is the ethics of adjusting humans genetically to fit a favored outcome. Today it’s heritable disease, but what traits might be seen as undesirable in the future, and be targeted for elimination? Short stature? Introverted personality? Klutziness?

To be sure, it’s not as though everyone is likely to line up for gene-edited offspring rather than just having babies, at least for the foreseeable future. The procedure can be performed only on in vitro embryos and requires precision timing.

The article is here.

Saturday, August 19, 2017

The role of empathy in experiencing vicarious anxiety

Shu, J., Hassell, S., Weber, J., Ochsner, K. N., & Mobbs, D. (2017).
Journal of Experimental Psychology: General, 146(8), 1164-1188.


With depictions of others facing threats common in the media, the experience of vicarious anxiety may be prevalent in the general population. However, the phenomenon of vicarious anxiety—the experience of anxiety in response to observing others expressing anxiety—and the interpersonal mechanisms underlying it have not been fully investigated in prior research. In 4 studies, we investigate the role of empathy in experiencing vicarious anxiety, using film clips depicting target victims facing threats. In Studies 1 and 2, trait emotional empathy was associated with greater self-reported anxiety when observing target victims, and with perceiving greater anxiety to be experienced by the targets. Study 3 extended these findings by demonstrating that trait empathic concern—the tendency to feel concern and compassion for others—was associated with experiencing vicarious anxiety, whereas trait personal distress—the tendency to experience distress in stressful situations—was not. Study 4 manipulated state empathy to establish a causal relationship between empathy and experience of vicarious anxiety. Participants who took an empathic perspective when observing target victims, as compared to those who took an objective perspective using reappraisal-based strategies, reported experiencing greater anxiety, risk-aversion, and sleep disruption the following night. These results highlight the impact of one’s social environment on experiencing anxiety, particularly for those who are highly empathic. In addition, these findings have implications for extending basic models of anxiety to incorporate interpersonal processes, understanding the role of empathy in social learning, and potential applications for therapeutic contexts.

The article is here.

CIA Psychologists Settle Torture Case Acknowledging Abuses

Peter Blumberg and Pamela Maclean
Bloomberg News
Originally published August 17, 2017

Two U.S. psychologists who helped design an overseas CIA interrogation program agreed to settle claims they were responsible for the torture of terrorism suspects, according to the American Civil Liberties Union, which brought the case.

The ACLU called the accord “historic” because it’s the first CIA-linked torture case of its kind that wasn’t dismissed, but said in a statement the terms of the settlement are confidential.

The case, which was set for a U.S. trial starting Sept. 5, focused on alleged abuses in the aftermath of the Sept. 11, 2001, attacks at secret “black-site” facilities that operated under President George W. Bush. The lawsuit followed the 2014 release of a congressional report on Central Intelligence Agency interrogation techniques.

The claims against the psychologists, who worked as government contractors, were filed on behalf of two suspected enemy combatants who were later released and a third who died in custody as a result of hypothermia during his captivity. All three men were interrogated at a site in Afghanistan, according to the ACLU.

ACLU lawyer Dror Ladin has said the case was a novel attempt to use the 1789 Alien Tort Claims Act to fix blame on U.S. citizens for human-rights violations committed abroad, unlike previous cases brought against foreigners.

The article is here.

Friday, August 18, 2017

Psychologists surveyed hundreds of alt-right supporters. The results are unsettling.

Brian Resnick
Vox
Originally posted August 15, 2017

Here is an excerpt:

The alt-right scores high on dehumanization measures

One of the starkest, darkest findings in the survey comes from a simple question: How evolved do you think other people are?

Kteily, the co-author on this paper, pioneered this new and disturbing way to measure dehumanization — the tendency to see others as being less than human. He simply shows study participants the following (scientifically inaccurate) image of a human ancestor slowly learning how to stand on two legs and become fully human.

Participants are asked to rate where certain groups fall on this scale from 0 to 100. Zero is not human at all; 100 is fully human.

On average, alt-righters saw other groups as hunched-over proto-humans.

On average, they rated Muslims at a 55.4 (again, out of 100), Democrats at 60.4, black people at 64.7, Mexicans at 67.7, journalists at 58.6, Jews at 73, and feminists at 57. These groups appear as subhumans to those taking the survey. And what about white people? They were scored at a noble 91.8. (You can look through all the data here.)

The article is here.

Trump fails morality test on Charlottesville

John Kass
Chicago Tribune
Originally posted on August 16, 2017

After the deadly violence of Charlottesville, Va., the amoral man in the White House failed his morality test. And in doing so, he gave the left a powerful weapon.


So President Trump was faced with a question of morality.

All he had to do was be unequivocal in his condemnation of the alt-right mob.

His brand as an alpha in a sea of political beta males promised he wouldn't be equivocal about anything.

But he failed, miserably, his mouth and tongue transformed into a dollop of lukewarm tapioca, talking in equivocal terms about the violence on "many sides."

Then he offered another statement, ostensibly to clarify and condemn the mob. But that was followed, predictably, by even more comments, as he desperately tried to publicly litigate his earlier failures.

In doing so, he gave the alt-right all they could dream of.

He said some attending the rally were "fine people."

Fine people don't go to white supremacist rallies to spew hate. Fine people don't remotely associate with the KKK. Fine people at a protest see men in white hoods and leave.

Fine people don't get in a car and in a murderous rage, run others down, including Heather Heyer, who in her death has become a saint of the left.

The article is here.

Thursday, August 17, 2017

Donald Trump has a very clear attitude about morality: He doesn't believe in it

John Harwood
CNBC
Originally published August 16, 2017

The more President Donald Trump reveals his character, the more he isolates himself from the American mainstream.

In a raucous press conference this afternoon, the president again blamed "both sides" for deadly violence in Charlottesville. He equated "Unite the Right" protesters — a collection including white supremacists, neo-Nazis and ex-KKK leader David Duke — with protesters who showed up to counter them.

Earlier he targeted business leaders — specifically, executives from Merck, Under Armour, Intel, and the Alliance for American Manufacturing — who had quit a White House advisory panel over Trump's message. In a tweet, the president called them "grandstanders."

That brought two related conclusions into focus. The president does not share the instinctive moral revulsion most Americans feel toward white supremacists and neo-Nazis. And he feels contempt for those — like the executives — who are motivated to express that revulsion at his expense.

No belief in others' morality

Trump has displayed this character trait repeatedly. It combines indifference to conventional notions of morality or propriety with disbelief that others would be motivated by them.

He dismissed suggestions that it was inappropriate for his son and campaign manager to have met with Russians offering dirt on Hillary Clinton during the presidential campaign. "Most people would have taken the meeting," he said. "Politics isn't the nicest business."

The article is here.

New Technology Standards Guide Social Work Practice and Education

Susan A. Knight
Social Work Today
Vol. 17 No. 4 P. 10

Today's technological landscape is vastly different from what it was just 10 to 15 years ago. Smartphones have replaced home landlines. Texting has become an accepted form of communication, both personally and professionally. Across sectors—health and human services, education, government, and business—employees conduct all manner of work on tablets and other portable devices. Along with "liking" posts on Facebook, people are tracking hashtags on Twitter, sending messages via Snapchat, and pinning pictures to Pinterest.

To top it all off, it seems that there's always a fresh controversy emerging because someone shared something questionable on a social media platform for the general public to see and critique.

Like every other field, social work practice is dealing with issues, challenges, and risks that were previously nonexistent. The NASW and Association of Social Work Boards (ASWB) Standards for Technology and Social Work Practice, dating back to 2005, were in desperate need of a rework in order to address all the changes and complexities within the technological environment that social workers are forced to contend with.

The newly released updated standards are the result of a collaborative effort between four major social work organizations: NASW, ASWB, the Clinical Social Work Association (CSWA), and the Council on Social Work Education (CSWE). "The intercollaboration in the development of the technology standards provides one consensus product and resource for social workers to refer to," says Mirean Coleman, MSW, LICSW, CT, clinical manager of NASW.

The article is here.

Wednesday, August 16, 2017

Learning morality through gaming

Jordan Erica Webber
The Guardian
Originally published 13 August 2017

Here is an excerpt:

Whether or not you agree with Snowden’s actions, the idea that playing video games could affect a person’s ethical position or even encourage any kind of philosophical thought is probably surprising. Yet we’re used to the notion that a person’s thinking could be influenced by the characters and conundrums in books, film and television; why not games? In fact, games have one big advantage that makes them especially useful for exploring philosophical ideas: they’re interactive.

As any student of philosophy will tell you, one of the primary ways of engaging with abstract questions is through thought experiments. Is Schrödinger’s cat dead or alive? Would you kill one person to save five? A thought experiment presents an imagined scenario (often because it wouldn’t be viable to perform the experiment in real life) to test intuitions about the consequences.

Video games, too, are made up of counterfactual narratives that test the player: here is a scenario, what would you do? Unlike books, film and television, games allow you to act on your intuition. Can you kill a character you’ve grown to know over hours of play, if it would save others?

The article is here.

What Does Patient Autonomy Mean for Doctors and Drug Makers?

Christina Sandefur
The Conversation
Originally published July 26, 2017

Here is an excerpt:

Although Bateman-House fears that deferring to patients comes at the expense of physician autonomy, she also laments that physicians currently abuse the freedom they have, failing to spend enough time with their patients, which she says undermines a patient’s ability to make informed medical decisions.

Even if it’s true that physician consultations aren’t as thorough as they once were, patients today have better access to health care information than ever before. According to the Pew Research Center, two-thirds of U.S. adults have broadband internet in their homes, and 13 percent who lack it can access the internet through a smartphone. Pew reports that more than half of adult internet users go online to get information on medical conditions, 43 percent on treatments, and 16 percent on drug safety. Yet despite their desire to research these issues online, 70 percent still sought out additional information from a doctor or other professional.

In other words, people are making greater efforts to learn about health care on their own. True, not all such information on the internet is accurate. But encouraging patients to seek out information from multiple sources is a good thing. In fact, requiring government approval of treatments may lull patients into a false sense of security. As Connor Boyack, president of the Libertas Institute, points out, “Instead of doing their own due diligence and research, the overwhelming majority of people simply concern themselves with whether or not the FDA says a certain product is okay to use.” But blind reliance on a government bureaucracy is rarely a good idea.

The article can be found here.

Tuesday, August 15, 2017

The ethical argument against philanthropy

Olivia Goldhill
Quartz
Originally posted July 22, 2017

Exceptionally wealthy people aren’t a likeable demographic, but they have an easy way to boost personal appeal: Become an exceptionally wealthy philanthropist. When the rich use their money to support a good cause, we’re compelled to compliment their generosity and praise their selfless work.

This is entirely the wrong response, according to Rob Reich, director of the Center for Ethics in Society at Stanford University.

Big philanthropy is, he says, “the odd encouragement of a plutocratic voice in a democratic society.” By offering philanthropists nothing but gratitude, we allow a huge amount of power to go unchecked. “Philanthropy, if you define it as the deployment of private wealth for some public influence, is an exercise of power. In a democratic society, power deserves scrutiny,” he adds.

A philanthropic foundation is a form of unaccountable power quite unlike any other organization in society. Government is at least somewhat beholden to voters, and private companies must contend with marketplace competition and the demands of shareholders.

But until the day that government services alleviate all human need, perhaps we should be willing to overlook the power dynamics of philanthropy—after all, surely charity in unchecked form is better than nothing?

The article is here.

Inferences about moral character moderate the impact of consequences on blame and praise

Jenifer Z. Siegel, Molly J. Crockett, and Raymond J. Dolan
Cognition, Volume 167, October 2017, Pages 201-211


Moral psychology research has highlighted several factors critical for evaluating the morality of another’s choice, including the detection of norm-violating outcomes, the extent to which an agent caused an outcome, and the extent to which the agent intended good or bad consequences, as inferred from observing their decisions. However, person-centered accounts of moral judgment suggest that a motivation to infer the moral character of others can itself impact on an evaluation of their choices. Building on this person-centered account, we examine whether inferences about agents’ moral character shape the sensitivity of moral judgments to the consequences of agents’ choices, and agents’ role in the causation of those consequences. Participants observed and judged sequences of decisions made by agents who were either bad or good, where each decision entailed a trade-off between personal profit and pain for an anonymous victim. Across trials we manipulated the magnitude of profit and pain resulting from the agent’s decision (consequences), and whether the outcome was caused via action or inaction (causation). Consistent with previous findings, we found that moral judgments were sensitive to consequences and causation. Furthermore, we show that the inferred character of an agent moderated the extent to which people were sensitive to consequences in their moral judgments. Specifically, participants were more sensitive to the magnitude of consequences in judgments of bad agents’ choices relative to good agents’ choices. We discuss and interpret these findings within a theoretical framework that views moral judgment as a dynamic process at the intersection of attention and social cognition.

The article is here.

Monday, August 14, 2017

AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?

Mark Wilson
Fast Company
Originally posted July 14, 2017

Here is an excerpt:

But how could any of this technology actually benefit the world, beyond these theoretical discussions? Would our servers be able to operate more efficiently with bots speaking to one another in shorthand? Could microsecond processes, like algorithmic trading, see some reasonable increase? Chatting with Facebook, and various experts, I couldn’t get a firm answer.

However, as paradoxical as this might sound, we might see big gains in such software better understanding our intent. While two computers speaking their own language might be more opaque, an algorithm predisposed to learn new languages might chew through strange new data we feed it more effectively. For example, one researcher recently tried to teach a neural net to create new colors and name them. It was terrible at it, generating names like Sudden Pine and Clear Paste (that clear paste, by the way, was labeled on a light green). But then they made a simple change to the data they were feeding the machine to train it. They made everything lowercase–because lowercase and uppercase letters were confusing it. Suddenly, the color-creating AI was working, well, pretty well! And for whatever reason, it preferred, and performed better, with RGB values as opposed to other numerical color codes.

Why did these simple data changes matter? Basically, the researcher did a better job at speaking the computer’s language. As one coder put it to me, “Getting the data into a format that makes sense for machine learning is a huge undertaking right now and is more art than science. English is a very convoluted and complicated language and not at all amicable for machine learning.”
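The lowercasing fix described above is a standard data-cleaning step in machine learning pipelines. A minimal sketch of the idea in Python (my own illustration, not the researcher's actual code; the color names and RGB values are hypothetical examples):

```python
# Minimal sketch of the preprocessing described above: lowercase the text
# labels and scale RGB channels into [0, 1] before training a model.
# The example color names and values below are hypothetical.

def normalize_example(name, rgb):
    """Lowercase a color name and scale its RGB triple to [0, 1]."""
    clean_name = name.strip().lower()
    scaled_rgb = tuple(channel / 255.0 for channel in rgb)
    return clean_name, scaled_rgb

raw_data = [("Sudden Pine", (34, 139, 34)), ("Clear Paste", (200, 230, 200))]
training_data = [normalize_example(name, rgb) for name, rgb in raw_data]
print(training_data[0][0])  # sudden pine
```

The point of the anecdote stands either way: the learner did not change, only the representation of its input did.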

The article is here.

Moral alchemy: How love changes norms

Rachel W. Magid and Laura E. Schulz
Cognition, Volume 167, October 2017, Pages 135-150


We discuss a process by which non-moral concerns (that is, concerns agreed to be non-moral within a particular cultural context) can take on moral content. We refer to this phenomenon as moral alchemy and suggest that it arises because moral obligations of care entail recursively valuing loved ones’ values, thus allowing propositions with no moral weight in themselves to become morally charged. Within this framework, we predict that when people believe a loved one cares about a behavior more than they do themselves, the moral imperative to care about the loved one’s interests will raise the value of that behavior, such that people will be more likely to infer that third parties will see the behavior as wrong (Experiment 1) and the behavior itself as more morally important (Experiment 2) than when the same behaviors are considered outside the context of a caring relationship. The current study confirmed these predictions.

The article is here.

Sunday, August 13, 2017

Ethical and legal considerations in psychobiography

Jason D Reynolds and Taewon Choi
American Psychologist, 72(5), 446-458 (2017)


Despite psychobiography's long-standing history in the field of psychology, there has been relatively little discussion of ethical issues and guidelines in psychobiographical research. The Ethics Code of the American Psychological Association (APA) does not address psychobiography. The present article highlights the value of psychobiography to psychology, reviews the history and current status of psychobiography in the field, examines the relevance of existing APA General Principles and Ethical Standards to psychobiographical research, and introduces a best practice ethical decision-making model to assist psychologists working in psychobiography. Given the potential impact of psychologists' evaluative judgments on other professionals and the lay public, it is emphasized that psychologists and other mental health professionals have a high standard of ethical vigilance in conducting and reporting psychobiography.

The article is here.

Saturday, August 12, 2017

Reminder: the Trump International Hotel is still an ethics disaster

Carly Sitrin
Vox
Originally published August 8, 2017

The Trump International Hotel in Washington, DC, has been serving as a White House extension since Donald Trump took office, and experts think this violates several governmental ethics rules.

The Washington Post reported Monday that the Trump International Hotel has played host to countless foreign dignitaries, Republican lawmakers, and powerful actors hoping to hold court with Trump appointees or even the president himself.

Since visitation records at the Trump International Hotel are not made public, the Post sent reporters to the hotel every day in May to try to identify people and organizations using the facilities.

What they found was a revolving door of powerful people holding galas in the hotel’s lavish ballrooms and meeting over expensive cocktails with White House staff at the bar.

They included Rep. Dana Rohrabacher (R-CA), whom Politico recently called “Putin’s favorite congressman”; Rep. Bill Shuster (R-PA), who chairs the House committee that oversees the General Services Administration, the Trump hotel’s landlord; and nine other Republican Congress members who all hosted events at the hotel, according to campaign spending disclosures obtained by the Post. Additionally, foreign visitors such as business groups promoting Turkish-American relations and the Romanian President Klaus Iohannis and his wife also rented out rooms.

The article is here.

Friday, August 11, 2017

What an artificial intelligence researcher fears about AI

Arend Hintze
The Conversation
Originally published July 14, 2017

Here is an excerpt:

Fear of the nightmare scenario

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time – somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things – as are saying we want to save the planet and successfully doing so.

The article is here.

The real problem (of consciousness)

Anil K Seth
Aeon
Originally posted November 2, 2016

Here is an excerpt:

The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).

Armed with this theory of perception, we can return to consciousness. Now, instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions, than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.
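The error-minimisation loop at the heart of predictive processing can be caricatured in a few lines of Python (a toy illustration of the general idea only, not anything from Seth's article; the learning rate, step count, and signal values are made up):

```python
# Toy illustration of predictive processing: only the prediction error
# flows "bottom-up", while the "top-down" prediction is repeatedly revised
# to minimise that error. The settled prediction plays the role of the percept.

def perceive(prediction, sensory_signal, learning_rate=0.1, steps=50):
    """Iteratively shrink prediction error; return the settled prediction."""
    for _ in range(steps):
        error = sensory_signal - prediction   # bottom-up: the error signal
        prediction += learning_rate * error   # top-down: update the hypothesis
    return prediction

percept = perceive(prediction=0.0, sensory_signal=1.0)
print(abs(percept - 1.0) < 0.01)  # True: the prediction has converged on the signal
```

Real predictive-coding models do this simultaneously across many hierarchical levels, but the same update-until-the-error-shrinks logic is the core of the 'controlled hallucination' picture.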

The article is here.

Thursday, August 10, 2017

Predatory Journals Hit By ‘Star Wars’ Sting

Neuroskeptic
Discover Magazine
Originally published July 19, 2017

A number of so-called scientific journals have accepted a Star Wars-themed spoof paper. The manuscript is an absurd mess of factual errors, plagiarism and movie quotes. I know because I wrote it.

Inspired by previous publishing “stings”, I wanted to test whether ‘predatory’ journals would publish an obviously absurd paper. So I created a spoof manuscript about “midi-chlorians” – the fictional entities which live inside cells and give Jedi their powers in Star Wars. I filled it with other references to the galaxy far, far away, and submitted it to nine journals under the names of Dr Lucas McGeorge and Dr Annette Kin.

Four journals fell for the sting. The American Journal of Medical and Biological Research (SciEP) accepted the paper, but asked for a $360 fee, which I didn’t pay. Amazingly, three other journals not only accepted but actually published the spoof. Here’s the paper from the International Journal of Molecular Biology: Open Access (MedCrave), the Austin Journal of Pharmacology and Therapeutics (Austin), and the American Research Journal of Biosciences (ARJ). I hadn’t expected this, as all those journals charge publication fees, but I never paid them a penny.

The blog post is here.

Wednesday, August 9, 2017

Career of the Future: Robot Psychologist

Christopher Mims
The Wall Street Journal
Originally published July 9, 2017

Artificial-intelligence engineers have a problem: They often don’t know what their creations are thinking.

As artificial intelligence grows in complexity and prevalence, it also grows more powerful. AI already has factored into decisions about who goes to jail and who receives a loan. There are suggestions AI should determine who gets the best chance to live when a self-driving car faces an unavoidable crash.

Defining AI is slippery and growing more so, as startups slather the buzzword over whatever they are doing. It is generally accepted as any attempt to ape human intelligence and abilities.

One subset that has taken off is neural networks, systems that “learn” as humans do through training, turning experience into networks of simulated neurons. The result isn’t code, but an unreadable, tangled mass of millions—in some cases billions—of artificial neurons, which explains why those who create modern AIs can be befuddled as to how they solve tasks.

Most researchers agree the challenge of understanding AI is pressing. If we don’t know how an artificial mind works, how can we ascertain its biases or predict its mistakes?

We won’t know in advance if an AI is racist, or what unexpected thought patterns it might have that would make it crash an autonomous vehicle. We might not know about an AI’s biases until long after it has made countless decisions. It’s important to know when an AI will fail or behave unexpectedly—when it might tell us, “I’m sorry, Dave. I’m afraid I can’t do that.”

“A big problem is people treat AI or machine learning as being very neutral,” said Tracy Chou, a software engineer who worked with machine learning at Pinterest Inc. “And a lot of that is people not understanding that it’s humans who design these models and humans who choose the data they are trained on.”

The article is here.

Tuesday, August 8, 2017

The next big corporate trend? Actually having ethics.

Patrick Quinlan
Originally published July 20, 2017

Here is an excerpt:

Slowly, brands are waking up to the fact that strong ethics and core values are no longer a “nice to have,” but a necessity. Failure to take responsibility in times of crisis can take an irreparable toll on the trust companies have worked so hard to build with employees, partners and customers. So many brands are still getting it wrong, and the consequences are real — public boycotting, massive fines, fired CEOs and falling stock prices.

This shift is what I call ethical transformation — the application of ethics and values across all aspects of business and society. It’s as impactful and critical as digital transformation, the other megatrend of the last 20 years. You can’t have one without the other. The internet stripped away barriers between consumers and brands, meaning that transparency and attention to ethics and values are at an all-time high. Brands have to get on board, now. Consider some oft-cited casualties of the digital transformation: Blockbuster, Kodak and Sears. That same fate awaits companies that can’t or won’t prioritize ethics and values.

This is a good thing. Ethical transformation pushes us into a better future, one built on genuinely ethical companies. But it’s not easy. In fact, it’s pretty hard. And it takes time. For decades, most of the business world focused on what not to do or how not to get fined. (In a word: Compliance.) Every so often, ethics and its even murkier brother “values” got a little love as an afterthought. Brands that did focus on values and ethics were considered exceptions to the rule — the USAAs and Toms shoes of the world. No longer.

The article is here.

Monday, August 7, 2017

Study suggests why more skin in the game won't fix Medicaid

Don Sapatkin
Originally posted July 19, 2017

Here is an excerpt:

Previous studies have found that increasing cost-sharing causes consumers to skip medical care somewhat indiscriminately. The Dutch research was the first to examine the impact of cost-sharing changes on specialty mental health-care, the authors wrote.

Jalpa A. Doshi, a researcher at the University of Pennsylvania’s Leonard Davis Institute of Health Economics, has examined how Americans with commercial insurance respond to cost-sharing for antidepressants.

“Because Medicaid is the largest insurer of low-income individuals with serious mental illnesses such as schizophrenia and bipolar disorder in the United States, lawmakers should be cautious on whether an increase in cost sharing for such a vulnerable group may be a penny-wise, pound-foolish policy,” Doshi said in an email after reading the new study.

Michael Brody, president and CEO of Mental Health Partnerships, formerly the Mental Health Association of Southeastern Pennsylvania, had an even stronger reaction about the possible implications for Medicaid patients.

The article is here.

Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci

Sven Nyholm
Science and Engineering Ethics
pp 1–19

Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human–robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.

The article is here.

Sunday, August 6, 2017

An erosion of ethics oversight should make us all more cynical about Trump

The Editorial Board
The Los Angeles Times
Originally published August 4, 2017

President Trump’s problems with ethics are manifest, from his refusal to make public his tax returns to the conflicts posed by his continued stake in the Trump Organization and its properties around the world — including the Trump International Hotel just down the street from the White House, in a building leased from the federal government he’s now in charge of. The president’s stubborn refusal to hew to the ethical norms set by his predecessors has left the nation to rightfully question whose best interests are foremost in his mind.

Some of the more persistent challenges to the Trump administration’s comportment have come from the Office of Government Ethics, whose recently departed director, Walter M. Shaub Jr., fought with the administration frequently over federal conflict-of-interest regulations. Under agency rules, chief of staff Shelley K. Finlayson should have been Shaub’s successor until the president nominated a new director, who would need Senate confirmation.

But Trump upended that transition last month by naming the office’s general counsel, David J. Apol, as the interim director. Apol has a reputation within the agency for taking contrarian — and usually more lenient — stances on ethics requirements than did Shaub and the consensus opinion of the staff (including Finlayson). And that, of course, raises the question of whether the White House replaced Finlayson with Apol in hopes of having a more conciliatory ethics chief without enduring a grueling nomination fight.

The article is here.

Saturday, August 5, 2017

Empathy makes us immoral

Olivia Goldhill
Originally published July 9, 2017

Empathy, in general, has an excellent reputation. But it leads us to make terrible decisions, according to Paul Bloom, psychology professor at Yale and author of Against Empathy: The Case for Rational Compassion. In fact, he argues, we would be far more moral if we had no empathy at all.

Though it sounds counterintuitive, Bloom makes a convincing case. First, he makes a point of defining empathy as putting yourself in the shoes of other people—“feeling their pain, seeing the world through their eyes.” When we rely on empathy to make moral decisions, he says, we end up prioritizing the person whose suffering we can easily relate to over that of any number of others who seem more distant. Indeed, studies have shown that empathy does encourage irrational moral decisions that favor one individual over the masses.

“When we rely on empathy, we think that a little girl stuck down a well is more important than all of climate change, is more important than tens of thousands of people dying in a far away country,” says Bloom. “Empathy zooms us in on the attractive, on the young, on people of the same race. It zooms us in on the one rather than the many. And so it distorts our priorities.”

The article is here.

Friday, August 4, 2017

Moral distress in physicians and nurses: Impact on professional quality of life and turnover

Austin, Cindy L.; Saylor, Robert; Finley, Phillip J.
Psychological Trauma: Theory, Research, Practice, and Policy, Vol 9(4), Jul 2017, 399-406.


Objective: The purpose of this study was to investigate moral distress (MD) and turnover intent as related to professional quality of life in physicians and nurses at a tertiary care hospital.

Method: Health care providers from a variety of hospital departments anonymously completed 2 validated questionnaires (Moral Distress Scale–Revised and Professional Quality of Life Scale). Compassion fatigue (as measured by secondary traumatic stress [STS] and burnout [BRN]) and compassion satisfaction are subscales which make up one’s professional quality of life. Relationships between these constructs and clinicians’ years in health care, critical care patient load, and professional discipline were explored.

Results: The findings (n = 329) demonstrated significant correlations between STS, BRN, and MD. Scores associated with intentions to leave or stay in a position were indicative of high versus low MD. We report the highest-scoring situations of MD, as well as the circumstances in which physicians and nurses are most at risk for STS, BRN, and MD. Both physicians and nurses identified the events contributing to the highest level of MD as being compelled to provide care that seems ineffective and working with a critical care patient load >50%.

Conclusion: The results from this study of physicians and nurses suggest that the presence of MD significantly impacts turnover intent and professional quality of life. Therefore implementation of emotional wellness activities (e.g., empowerment, opportunity for open dialog regarding ethical dilemmas, policy making involvement) coupled with ongoing monitoring and routine assessment of these maladaptive characteristics is warranted.

The article is here.

Re: Nudges in a Post-truth World

Guest Post: Nathan Hodson
Journal of Medical Ethics Blog
Originally posted July 19, 2017

Here is an excerpt:

As Levy notes, some people are concerned that nudges present a threat to autonomy. Attempts at reconciling nudges with ethics, then, are important because nudging in healthcare is here to stay but we need to ensure it is used in ways that respect autonomy (and other moral principles).

The term “nudge” is perhaps a misnomer. To fill out the concept a bit, it commonly denotes the application of behavioural economics and behavioural psychology to the construction of choice architecture, informed by carefully designed trials. But every choice we face, in any context, already comes with a choice architecture: there are endless contextual factors that impact the decisions we make.

When we ask whether nudging is acceptable we are asking whether an arbitrary or random choice architecture is more acceptable than a deliberate choice architecture, or whether an uninformed choice architecture is better than one informed by research.

In fact the permissibility of a nudge derives from whether it is being used in an ethically acceptable way, something that can only be explored on an individual basis. Thaler and Sunstein locate ethical acceptability in promoting the health of the person being nudged (and call this Libertarian Paternalism — i.e. sensible choices are promoted but no option is foreclosed). An alternative approach was proposed by Mitchell: nudges are justified if they maximise future liberty. Either way the nudging itself is not inherently problematic.

The article is here.

Thursday, August 3, 2017

The Trouble With Sex Robots

By Laura Bates
The New York Times
Originally posted

Here is an excerpt:

One of the authors of the Foundation for Responsible Robotics report, Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, England, said there are ethical arguments within the field about sex robots with “frigid” settings.

“The idea is robots would resist your sexual advances so that you could rape them,” Professor Sharkey said. “Some people say it’s better they rape robots than rape real people. There are other people saying this would just encourage rapists more.”

Like the argument that women-only train compartments are an answer to sexual harassment and assault, the notion that sex robots could reduce rape is deeply flawed. It suggests that male violence against women is innate and inevitable, and can be only mitigated, not prevented. This is not only insulting to a vast majority of men, but it also entirely shifts responsibility for dealing with these crimes onto their victims — women, and society at large — while creating impunity for perpetrators.

Rape is not an act of sexual passion. It is a violent crime. We should no more be encouraging rapists to find a supposedly safe outlet for it than we should facilitate murderers by giving them realistic, blood-spurting dummies to stab. Since that suggestion sounds ridiculous, why does the idea of providing sexual abusers with lifelike robotic victims sound feasible to some?

The article is here.

The Wellsprings of Our Morality

Daniel M.T. Fessler
What can evolution tell us about morality?

Mother Nature is amoral, yet morality is universal. The natural world lacks both any guiding hand and any moral compass. And yet all human societies have moral rules, and, with the exception of some individuals suffering from pathology, all people experience profound feelings that shape their actions in light of such rules. Where then did these constellations of rules and feelings come from?

The term “morality” jumbles rules and feelings, as well as judgments of others’ actions that result from the intersection of rules and feelings. Rules, like other features of culture, are ideas transmitted from person to person: “It is laudable to do X,” “It is a sin to do Y,” etc. Feelings are internal states evoked by events, or by thoughts of future possibilities: “I am proud that she did X,” “I am outraged that he did Y,” and so on. Praise or condemnation are social acts, often motivated by feelings, in response to other people’s behavior. All of this is commonly called “morality.”

So, what does it mean to say that morality is universal? You don’t need to be an anthropologist to recognize that, while people everywhere experience strong feelings about others’ behavior—and, as a result, reward or punish that behavior—cultures differ with regard to the beliefs on which they base such judgments. Is injustice a graver sin than disrespect for tradition? Which is more important, the autonomy of the individual or the harmony of the group? The answer is that it depends on whom you ask.

The information is here.

Wednesday, August 2, 2017

Ships in the Rising Sea? Changes Over Time in Psychologists’ Ethical Beliefs and Behaviors

Rebecca A. Schwartz-Mette & David S. Shen-Miller
Ethics & Behavior 


Beliefs about the importance of ethical behavior to competent practice have prompted major shifts in psychology ethics over time. Yet few studies examine ethical beliefs and behavior after training, and most comprehensive research is now 30 years old. As such, it is unclear whether shifts in the field have resulted in general improvements in ethical practice: Are we psychologists “ships in the rising sea,” lifted by changes in ethical codes and training over time? Participants (N = 325) completed a survey of ethical beliefs and behaviors (Pope, Tabachnick, & Keith-Spiegel, 1987). Analyses examined group differences, consistency of frequency and ethicality ratings, and comparisons with past data. More than half of behaviors were rated as less ethical and occurring less frequently than in 1987, with early career psychologists generally reporting less ethically questionable behavior. Recommendations for enhancing ethics education are discussed.

The article is here.

A Primatological Perspective on Evolution and Morality

Sarah F. Brosnan
What can evolution tell us about morality?

Morality is a key feature of humanity, but how did we become a moral species? And is morality a uniquely human phenomenon, or do we see its roots in other species? One of the most fun parts of my research is studying the evolutionary basis of behaviors that we think of as quintessentially human, such as morality, to try to understand where they came from and what purpose they serve. In so doing, we can not only better understand why people behave the way that they do, but we also may be able to develop interventions that promote more beneficial decision-making.

Of course, a “quintessentially human” behavior is not replicated, at least in its entirety, in another species, so how does one study the evolutionary history of such behaviors? To do so, we focus on precursor behaviors that are related to the one in question and provide insight into the evolution of the target behavior. A precursor behavior may look very different from the final instantiation; for instance, birds’ wings appear to have originated as feathers that were used for either insulation or advertisement (i.e., sexual selection) that, through a series of intermediate forms, evolved into feathered wings. The chemical definition may be even more apt; a precursor molecule is one that triggers a reaction, resulting in a chemical that is fundamentally different from the initial chemicals used in the reaction.

How is this related to morality? We would not expect to see human morality in other species, as morality implies the ability to debate ethics and develop group rules and norms, which is not possible in non-verbal species. However, complex traits like morality do not arise de novo; like wings, they evolve from existing traits. Therefore, we can look for potential precursors in other species in order to better understand the evolutionary history of morality.

The information is here.

Tuesday, August 1, 2017

Morality isn’t a compass — it’s a calculator

DB Krupp
The Conversation
Originally published July 9, 2017

Here is the conclusion:

Unfortunately, the beliefs that straddle moral fault lines are largely impervious to empirical critique. We simply embrace the evidence that supports our cause and deny the evidence that doesn’t. If strategic thinking motivates belief, and belief motivates reason, then we may be wasting our time trying to persuade the opposition to change their minds.

Instead, we should strive to change the costs and benefits that provoke discord in the first place. Many disagreements are the result of worlds colliding — people with different backgrounds making different assessments of the same situation. By closing the gap between their experiences and by lowering the stakes, we can bring them closer to consensus. This may mean reducing inequality, improving access to health care or increasing contact between unfamiliar groups.

We have little reason to see ourselves as unbiased sources of moral righteousness, but we probably will anyway. The least we can do is minimize that bias a bit.

The article is here.

Henderson psychologist charged with murder can reopen practice

David Ferrara
Las Vegas Review-Journal
Originally posted July 14, 2017

A psychologist accused of killing his wife and staging her death as a suicide can start practicing medicine again in less than four months, the Nevada Board of Psychological Examiners decided Friday.

Suspected of abusing drugs and obtaining prescription drugs from patients, Gregory “Brent” Dennis, who prosecutors say poisoned attorney Susan Winters inside their Henderson home, also must undergo up to seven years of drug treatment, the seven-member panel ruled as they signed a settlement agreement that made no mention of the murder charge.

“It’s clear that the board members do not know what Brent Dennis was arrested for,” Keith Williams, a lawyer for the Winters family, told a Las Vegas Review-Journal reporter after the meeting. “We’re confident that they did not know what they were voting on today.”

Henderson police arrested Dennis on the murder charge in February.

The article is here.

Monday, July 31, 2017

Truth or Punishment: Secrecy and Punishing the Self

Michael L. Slepian and Brock Bastian
Personality and Social Psychology Bulletin
First Published July 14, 2017, 1–17


We live in a world that values justice; when a crime is committed, just punishment is expected to follow. Keeping one’s misdeed secret therefore appears to be a strategic way to avoid (just) consequences. Yet, people may engage in self-punishment to right their own wrongs to balance their personal sense of justice. Thus, those who seek an escape from justice by keeping secrets may in fact end up serving that same justice on themselves (through self-punishment). Six studies demonstrate that thinking about secret (vs. confessed) misdeeds leads to increased self-punishment (increased denial of pleasure and seeking of pain). These effects were mediated by the feeling one deserved to be punished, moderated by the significance of the secret, and were observed for both self-reported and behavioral measures of self-punishment.

Here is an excerpt:

Recent work suggests, however, that people who are reminded of their own misdeeds will sometimes seek out their own justice. That is, even subtle acts of self-punishment can restore a sense of personal justice, whereby a wrong feels to have been righted (Bastian et al., 2011; Inbar et al., 2013). Thus, we predicted that even though keeping a misdeed secret could lead one to avoid being punished by others, it still could prompt a desire for punishment all the same, one inflicted by the self.

The article is here.

Note: There are significant implications in this article for psychotherapists.

Is it dangerous to recreate flawed human morality in machines?

Alexandra Myers-Lewis
Originally published July 13, 2017

Here are two excerpts:

The need for ethical machines may be one of the defining issues of our time. Algorithms are created to govern critical systems in our society, from banking to medicine, but with no concept of right and wrong, machines cannot understand the repercussions of their actions. A machine has never thrown a punch in a schoolyard fight, cheated on a test or a relationship, or been rapt with the special kind of self-doubt that funds our cosmetic and pharmaceutical industries. Simply put, an ethical machine will always be an it - but how can it be more?


A self-driving car wouldn't just have to make decisions in life-and-death situations - as if that wasn't enough - but would also need to judge how much risk is acceptable at any given time. But who will ultimately restrict this decision-making process? Would it be the job of the engineer to determine which circumstances it is acceptable to overtake a cyclist? You won't lose sleep pegging a deer over a goat. But a person? Choosing who potentially lives and dies based on a number has an inescapable air of dystopia. You may see tight street corners and hear the groan of oncoming traffic, but an algorithm will only see the world in numbers. These numbers will form its memories and its reason, the force that moves the car out into the road.

"I think people will be very uncomfortable with the idea of a machine deciding between life and death," Sütfeld says, "In this regard we believe that transparency and comprehensibility could be a very important factor to gain public acceptance of these systems. Or put another way, people may favour a transparent and comprehensible system over a more complex black-box system. We would hope that the people will understand this general necessity of a moral compass and that the discussion will be about what approach to take, and how such systems should decide. If this is put in, every car will make the same decision and if there is a good common ground in terms of model, this could improve public safety."

The article is here.

Sunday, July 30, 2017

Should we be afraid of AI?

Luciano Floridi
Originally published

Here is an excerpt:

True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.

We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which indicates that proof systems in logic, on the one hand, and models of computation, on the other, are in fact structurally the same kind of objects, and so any logical limit applies to computers as well. Plenty of machines can do amazing things, including playing checkers, chess and Go and the quiz show Jeopardy better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine. The point is that our smart technologies – also thanks to the enormous amount of available data and some very sophisticated programming – are increasingly able to deal with more tasks better than we do, including predicting our behaviours. So we are not the only agents able to perform tasks successfully.
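Note: The undecidable problems Floridi refers to can be made concrete with the classic halting-problem diagonal argument. The sketch below is an illustration of that argument only, not code from the article; `diagonalize`, `paradox`, and `always_no` are hypothetical names.

```python
def diagonalize(halts):
    """Given any claimed halting-decider, build a program it must misjudge."""
    def paradox():
        if halts(paradox):
            # The decider said paradox halts, so do the opposite: loop forever.
            while True:
                pass
        # The decider said paradox loops forever, so do the opposite: halt.
        return "halted"
    return paradox

# Any concrete decider is refuted by its own diagonal program.
# For example, a decider that always answers "loops forever":
always_no = lambda f: False
q = diagonalize(always_no)
result = q()  # returns "halted" -- q halts, contradicting always_no's verdict
```

Had the decider answered True instead, `paradox` would loop forever, making it wrong the other way; since every candidate decider fails on its own diagonal program, no algorithm can always give the correct yes-or-no answer — which is exactly the limit Floridi invokes.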

Engineering Eden: The quest for eternal life

Kristin Kostick
Baylor College of Medicine
Originally posted June 2, 2017

If you’re like most people, you may associate the phrase “eternal life” with religion: The promise that we can live forever if we just believe in God. You probably don’t associate the phrase with an image of scientists working in a lab, peering at worms through microscopes or mice skittering through boxes. But you should.

The quest for eternal life has only recently begun to step out from behind the pews and into the petri dish.

I recently discussed the increasing feasibility of the transhumanist vision due to continuing advancements in biotech, gene- and cell-therapies. These emerging technologies, however, don’t erase the fact that religion – not science – has always been our salve for confronting death’s inevitability. For believers, religion provides an enduring mechanism (belief and virtue) behind the perpetuity of existence, and shushes our otherwise frantic inability to grasp: How can I, as a person, just end?

The Mormon transhumanist Lincoln Cannon argues that science, rather than religion, offers a tangible solution to this most basic existential dilemma. He points out that it is no longer tenable to believe in eternal life as only available in heaven, requiring the death of our earthly bodies before becoming eternal, celestial beings.

Would a rational person choose to believe in an uncertain, spiritual afterlife over the tangible persistence of one’s own familiar body and the comforting security of relationships we’ve fostered over a lifetime of meaningful interactions?

The article is here.

Saturday, July 29, 2017

On ethics, Trump is leading America in the wrong direction

Jeffrey D. Sachs
Originally published July 26, 2017

Here is an excerpt:

So here we are. Bribes are no longer bribes, campaign funds from corporations are free speech, and the politicians are just being good public servants when they accept money from those who seek their favor. Crooked politicians are thrilled; the rest of us look on shocked at the pageantry of cynicism and immorality. Senior officials in law-abiding countries have told me they can hardly believe their eyes as to what is underway in the United States.

Which brings us to Donald Trump. Trump seems to know no limits whatsoever in his commingling of the public interest and his personal business interests. He failed to give up his ownership interest in his businesses upon taking office. (Trump resigned from positions in his companies and said his two sons are in charge.)

Government and Republican Party activities have been booked into Trump properties. Trump campaign funds are used to hire lawyers to defend Donald Trump Jr. in the Russia probe. Campaign associates such as Paul Manafort and Michael Flynn have been under scrutiny for their business dealings with clients tied to foreign governments.

In response to the stench, the former head of the government ethics office recently resigned, declaring that the United States is "pretty close to a laughingstock at this point." The resignation was not remarkable under the circumstances. What was remarkable is that most Republicans politicians remain mum to these abuses. Of course too many politicians of both parties are deeply compromised by financial dependence on corporate campaign donors.

The article is here.

Trump Has Plunged Nation Into ‘Ethics Crisis,’ Ex-Watchdog Says

Britain Eakin
Courthouse News Service
Originally published July 28, 2017

The government’s former top ethics chief sounded the alarm Friday, saying the first six months of the Trump administration have been “an absolute shock to the system” that has plunged the nation into “an ethics crisis.”

Walter Shaub Jr. resigned July 6 after months of clashes with the White House over issues such as President Trump’s refusal to divest his businesses and the administration’s delay in disclosing ethics waivers for appointees.

As he left office he told NPR that “the current situation has made it clear that the ethics program needs to be stronger than it is.”

He did not elaborate at that time on what about the “situation” so troubled him, but he said at the Campaign Legal Center, he would have more freedom “to push for reform” while broadening his focus to ethics issues at all levels of government.

During a talk at the National Press Club Friday morning, Shaub said the president and other administration officials have departed from ethical principles and norms as part of a broader assault on the American representative form of government.

Shaub said he is “extremely concerned” by this.

“The biggest concern is that norms evolve. So if we have a shock to the system, what we’re experiencing now could become the new norm,” Shaub said.

The article is here.

Ethics and Governance AI Fund funnels $7.6M to Harvard, MIT and independent research efforts

Devin Coldewey
Tech Crunch
Originally posted July 11, 2017

A $27 million fund aimed at applying artificial intelligence to the public interest has announced the first targets for its beneficence: $7.6 million will be split unequally between MIT’s Media Lab, Harvard’s Berkman Klein Center and seven smaller research efforts around the world.

The Ethics and Governance of Artificial Intelligence Fund was created by Reid Hoffman, Pierre Omidyar and the Knight Foundation back in January; the intention was to ensure that “social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers” have a say in how AI is developed and deployed.

To that end, this first round of funding supports existing organizations working along those lines, as well as nurturing some newer ones.

The lion’s share of this initial round, $5.9 million, will be split between MIT and Harvard, as the initial announcement indicated. The Media Lab is, of course, on the cutting edge of many research efforts in AI and elsewhere; Berkman Klein focuses more on the legal and analysis side of things.

The fund’s focus is threefold:

  • Media and information quality – looking at how to understand and control the effects of autonomous information systems and “influential algorithms” like Facebook’s news feed.
  • Social and criminal justice – perhaps the area where the bad influence of AI-type systems could be the most insidious; biases in data and interpretation could be baked into investigative and legal systems, giving them the illusion of objectivity. (Obviously the fund seeks to avoid this.)
  • Autonomous cars – although this may seem incongruous with the others, self-driving cars represent an immense social opportunity. Mobility is one of the most influential social-economic factors, and its reinvention offers a chance to improve the condition of nearly everyone on the planet — great potential for both advancement and abuse.

Friday, July 28, 2017

You are fair, but I expect you to also behave unfairly

Positive asymmetry in trait-behavior relations for moderate morality information

Patrice Rusconi, Simona Sacchi, Roberta Capellini, Marco Brambilla, Paolo Cherubini
Published: July 11, 2017

Summary: People who are believed to be immoral are unable to reverse individuals' perception of them, potentially resulting in difficulties in the workplace and barriers in accessing fair and equal treatment in the legal system.


Trait inference in person perception is based on observers’ implicit assumptions about the relations between trait adjectives (e.g., fair) and the consistent or inconsistent behaviors (e.g., having double standards) that an actor can manifest. This article presents new empirical data and theoretical interpretations on people’s behavioral expectations, that is, people’s perceived trait-behavior relations along the morality (versus competence) dimension. We specifically address the issue of moderate levels of both traits and behaviors, almost neglected by prior research, by using a measure of the perceived general frequency of behaviors. A preliminary study identifies a set of competence- and morality-related traits and a subset of traits balanced for valence. Studies 1–2 show that moral target persons are associated with greater behavioral flexibility than immoral ones where abstract categories of behaviors are concerned. For example, participants judge it more likely that a fair person would behave unfairly than that an unfair person would behave fairly. Study 3 replicates the results of the first 2 studies using concrete categories of behaviors (e.g., telling the truth/omitting some information). Study 4 shows that the positive asymmetry in morality-related trait-behavior relations holds for both North-American and European (i.e., Italian) individuals. A small-scale meta-analysis confirms the existence of a positive asymmetry in trait-behavior relations along both morality and competence dimensions for moderate levels of both traits and behaviors. We discuss these findings in relation to prior models and results on trait-behavior relations and we advance a motivational explanation based on self-protection.

The article is here.

Note: This research also applies to perceptions in psychotherapy and in family relationships.

I attend, therefore I am

Carolyn Dicey Jennings
Originally published July 10, 2017

Here is an excerpt:

Following such considerations, the philosopher Daniel Dennett proposed that the self is simply a ‘centre of narrative gravity’ – just as the centre of gravity in a physical object is not a part of that object, but a useful concept we use to understand the relationship between that object and its environment, the centre of narrative gravity in us is not a part of our bodies, a soul inside of us, but a useful concept we use to make sense of the relationship between our bodies, complete with their own goals and intentions, and our environment. So, you, you, are a construct, albeit a useful one. Or so goes Dennett’s thinking on the self.

And it isn’t just Dennett. The idea that there is a substantive self is passé. When cognitive scientists aim to provide an empirical account of the self, it is simply an account of our sense of self – why it is that we think we have a self. What we don’t find is an account of a self with independent powers, responsible for directing attention and resolving conflicts of will.

There are many reasons for this. One is that many scientists think that the evidence counts in favour of our experience in general being epiphenomenal – something that does not influence our brain, but is influenced by it. In this view, when you experience making a tough decision, for instance, that decision was already made by your brain, and your experience is a mere shadow of that decision. So for the very situations in which we might think the self is most active – in resolving difficult decisions – everything is in fact already achieved by the brain.

The article is here.

Thursday, July 27, 2017

First Human Embryos Edited in U.S.

Steve Connor
MIT Technology Review
Originally published July 26, 2017

The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon, Technology Review has learned.

The effort, led by Shoukhrat Mitalipov of Oregon Health and Science University, involved changing the DNA of a large number of one-cell embryos with the gene-editing technique CRISPR, according to people familiar with the scientific results.

Until now, American scientists have watched with a combination of awe, envy, and some alarm as scientists elsewhere were first to explore the controversial practice. To date, three previous reports of editing human embryos were all published by scientists in China.

Now Mitalipov is believed to have broken new ground both in the number of embryos experimented upon and by demonstrating that it is possible to safely and efficiently correct defective genes that cause inherited diseases.

Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.

The article is here.

Psychiatry Group Tells Members They Can Ignore ‘Goldwater Rule’ and Comment on Trump’s Mental Health

Sharon Begley
Global Research
Originally published July 25, 2017

A leading psychiatry group has told its members they should not feel bound by a longstanding rule against commenting publicly on the mental state of public figures — even the president.

The statement, an email this month from the executive committee of the American Psychoanalytic Association to its 3,500 members, represents the first significant crack in the profession’s decades-old united front aimed at preventing experts from discussing the psychiatric aspects of politicians’ behavior. It will likely make many of its members feel more comfortable speaking openly about President Trump’s mental health.

The impetus for the email was “belief in the value of psychoanalytic knowledge in explaining human behavior,” said psychoanalytic association past president Dr. Prudence Gourguechon, a psychiatrist in Chicago.

“We don’t want to prohibit our members from using their knowledge responsibly.”

That responsibility is especially great today, she told STAT, “since Trump’s behavior is so different from anything we’ve seen before” in a commander in chief.

An increasing number of psychologists and psychiatrists have denounced the restriction as a “gag rule” and flouted it, with some arguing they have a “duty to warn” the public about what they see as Trump’s narcissism, impulsivity, poor attention span, paranoia, and other traits that, they believe, impair his ability to lead.

The article is here.

Wednesday, July 26, 2017

Everybody lies: how Google search reveals our darkest secrets

Seth Stephens-Davidowitz
The Guardian
Originally published July 9, 2017

Everybody lies. People lie about how many drinks they had on the way home. They lie about how often they go to the gym, how much those new shoes cost, whether they read that book. They call in sick when they’re not. They say they’ll be in touch when they won’t. They say it’s not about you when it is. They say they love you when they don’t. They say they’re happy while in the dumps. They say they like women when they really like men. People lie to friends. They lie to bosses. They lie to kids. They lie to parents. They lie to doctors. They lie to husbands. They lie to wives. They lie to themselves. And they damn sure lie to surveys. Here’s my brief survey for you:

Have you ever cheated in an exam?

Have you ever fantasised about killing someone?

Were you tempted to lie?

Many people underreport embarrassing behaviours and thoughts on surveys. They want to look good, even though most surveys are anonymous. This is called social desirability bias. An important paper in 1950 provided powerful evidence of how surveys can fall victim to such bias. Researchers collected data, from official sources, on the residents of Denver: what percentage of them voted, gave to charity, and owned a library card. They then surveyed the residents to see if the percentages would match. The results were, at the time, shocking. What the residents reported to the surveys was very different from the data the researchers had gathered. Even though nobody gave their names, people, in large numbers, exaggerated their voter registration status, voting behaviour, and charitable giving.

The article is here.

Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios

Leon R. Sütfeld, Richard Gast, Peter König and Gordon Pipa
Front. Behav. Neurosci., 05 July 2017

Self-driving cars are posing a new challenge to our ethics. By using algorithms to make decisions in situations where harming humans is possible, probable, or even unavoidable, a self-driving car's ethical behavior comes pre-defined. Ad hoc decisions are made in milliseconds, but can be based on extensive research and debates. The same algorithms are also likely to be used in millions of cars at a time, increasing the impact of any inherent biases, and increasing the importance of getting it right. Previous research has shown that moral judgment and behavior are highly context-dependent, and comprehensive and nuanced models of the underlying cognitive processes are out of reach to date. Models of ethics for self-driving cars should thus aim to match human decisions made in the same context. We employed immersive virtual reality to assess ethical behavior in simulated road traffic scenarios, and used the collected data to train and evaluate a range of decision models. In the study, participants controlled a virtual car and had to choose which of two given obstacles they would sacrifice in order to spare the other. We randomly sampled obstacles from a variety of inanimate objects, animals and humans. Our model comparison shows that simple models based on one-dimensional value-of-life scales are suited to describe human ethical behavior in these situations. Furthermore, we examined the influence of severe time pressure on the decision-making process. We found that it decreases consistency in the decision patterns, thus providing an argument for algorithmic decision-making in road traffic. This study demonstrates the suitability of virtual reality for the assessment of ethical behavior in humans, delivering consistent results across subjects, while closely matching the experimental settings to the real world scenarios in question.
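The "one-dimensional value-of-life scale" the abstract describes can be sketched as follows: each obstacle category gets a single scalar value, and the probability of sparing one obstacle over another is a logistic function of the difference in their values. This is a minimal illustration, not the authors' actual model; the category values and the scale parameter here are hypothetical.

```python
import math

def p_spare_a(value_a, value_b, scale=1.0):
    """Probability of sparing obstacle A (i.e., sacrificing obstacle B),
    modeled as a logistic function of the value-of-life difference."""
    return 1.0 / (1.0 + math.exp(-(value_a - value_b) / scale))

# Hypothetical fitted values on an arbitrary scale (illustration only).
values = {"trash_can": 0.0, "dog": 1.5, "adult": 3.0, "child": 3.8}

# e.g., the model's probability of sparing a child when the alternative
# is a dog: a value well above 0.5, reflecting the higher scalar value.
p = p_spare_a(values["child"], values["dog"])
```

Under such a model, increased time pressure could be captured by a larger `scale` parameter, flattening the logistic curve and making choices less consistent, which mirrors the decreased decision consistency the study reports.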

The article is here.

Tuesday, July 25, 2017

Should a rapist get Viagra or a robber get a cataracts op?

Tom Douglas
Aeon Magazine
Originally published on July 7, 2017

Suppose a physician is about to treat a patient for diminished sex drive when she discovers that the patient – let’s call him Abe – has raped several women in the past. Fearing that boosting his sex drive might lead Abe to commit further sex offences, she declines to offer the treatment. Refusal to provide medical treatment in this case strikes many as reasonable. It might not be entirely unproblematic, since some will argue that he has a human right to medical treatment, but many of us would probably think the physician is within her rights – she’s not obliged to treat Abe. At least, not if her fears about further offending are well-founded.

But now consider a different case. Suppose an eye surgeon is about to book Bert in for a cataract operation when she discovers that he is a serial bank robber. Fearing that treating his developing blindness might help Bert to carry off further heists, she declines to offer the operation. In many ways, this case mirrors that of Abe. But morally, it seems different. In this case, refusing treatment does not seem reasonable, no matter how well-founded the surgeon’s fear. What’s puzzling is why. Why is Bert’s surgeon obliged to treat his blindness, while Abe’s physician has no similar obligation to boost his libido?

Here’s an initial suggestion: diminished libido, it might be said, is not a ‘real disease’. An inconvenience, certainly. A disability, perhaps. But a genuine pathology? No. By contrast, cataract disease clearly is a true pathology. So – the argument might go – Bert has a stronger claim to treatment than Abe. But even if reduced libido is not itself a disease – a view that could be contested – it could have pathological origins. Suppose Abe has a disease that suppresses testosterone production, and thus libido. And suppose that the physician’s treatment would restore his libido by correcting this disease. Still, it would seem reasonable for her to refuse the treatment, if she had good grounds to believe providing it could result in further sex offences.

A new breed of scientist, with brains of silicon

John Bohannon
Science Magazine
Originally published July 5, 2017

Here is an excerpt:

But here’s the key difference: When the robots do finally discover the genetic changes that boost chemical output, they don’t have a clue about the biochemistry behind their effects.

Is it really science, then, if the experiments don’t deepen our understanding of how biology works? To Kimball, that philosophical point may not matter. “We get paid because it works, not because we understand why.”

So far, Hoffman says, Zymergen’s robotic lab has boosted the efficiency of chemical-producing microbes by more than 10%. That increase may not sound like much, but in the $160-billion-per-year sector of the chemical industry that relies on microbial fermentation, a fractional improvement could translate to more money than the entire $7 billion annual budget of the National Science Foundation. And the advantageous genetic changes that the robots find represent real discoveries, ones that human scientists probably wouldn’t have identified. Most of the output-boosting genes are not directly related to synthesizing the desired chemical, for instance, and half have no known function. “I’ve seen this pattern now in several different microbes,” Dean says. Finding the right genetic combinations without machine learning would be like trying to crack a safe with thousands of numbers on its dial. “Our intuitions are easily overwhelmed by the complexity,” he says.

The article is here.

Monday, July 24, 2017

GOP Lawmakers Buy Health Insurance Stocks as Repeal Efforts Move Forward

Lee Fang
The Intercept
Originally posted July 6, 2017

Here is an excerpt:

The issue of insider political trading, with members and staff buying and selling stock using privileged information, has continued to plague Congress. It gained national prominence during the confirmation hearings for Health and Human Services Secretary Tom Price, when it was revealed that the Georgia Republican had bought shares in Innate Immunotherapeutics, a relatively obscure Australian biotechnology firm, while legislating on policies that could have impacted the firm’s performance.

The stock advice had been passed to Price from Rep. Chris Collins, R-N.Y., a board member for Innate Immunotherapeutics, and was shared with a number of other GOP lawmakers, who also invested in the firm. Conaway, records show, bought shares in the company a week after Price.

Conaway, who serves as a GOP deputy whip in the House, has a long record of investing in firms that coincide with his official duties. Politico reported that Conaway’s wife purchased stock in a nuclear firm just after Conaway sponsored a bill to deal with nuclear waste storage in his district. The firm stood to directly benefit from the legislation.

Some of the biggest controversies stem from the revelation that during the 2008 financial crisis, multiple lawmakers from both parties rearranged their financial portfolios to avoid heavy losses. In one case, former Rep. Spencer Bachus, R-Ala., used confidential meetings about the unfolding bank crisis to make special trades designed to increase in value as the stock market plummeted.

The article is here.

Even the Insured Often Can't Afford Their Medical Bills

Helaine Olen
The Atlantic
Originally published June 18, 2017

Here is an excerpt:

The current debate over the future of the Affordable Care Act is obscuring a more pedestrian reality. Just because a person is insured, it doesn’t mean he or she can actually afford their doctor, hospital, pharmaceutical, and other medical bills. The point of insurance is to protect patients’ finances from the costs of everything from hospitalizations to prescription drugs, but out-of-pocket spending for people even with employer-provided health insurance has increased by more than 50 percent since 2010, according to human resources consultant Aon Hewitt. The Kaiser Family Foundation reports that in 2016, half of all insurance policy-holders faced a deductible, the amount people need to pay on their own before their insurance kicks in, of at least $1,000. For people who buy their insurance via one of the Affordable Care Act’s exchanges, that figure will be higher still: Almost 90 percent have deductibles of $1,300 for an individual or $2,600 for a family.

Even a gold-plated insurance plan with a low deductible and generous reimbursements often has its holes. Many people have separate—and often hard-to-understand—in-network and out-of-network deductibles, or lack out-of-network coverage altogether.  Expensive pharmaceuticals are increasingly likely to require a significantly higher co-pay or not be covered at all. While many plans cap out-of-pocket spending, that cap can often be quite high—in 2017, it’s $14,300 for a family plan purchased on the ACA exchanges, for example. Depending on the plan, medical care received from a provider not participating in a particular insurer’s network might not count toward any deductible or cap at all.

The article is here.

Sunday, July 23, 2017

Stop Obsessing Over Race and IQ

John McWhorter
The National Review
Originally published July 5, 2017

Here are three excerpts:

Suppose that, at the end of the day, people of African descent have lower IQs on average than do other groups of humans, and that this gap is caused, at least in part, by genetic differences.


There is, however, a question that those claiming black people are genetically predisposed to have lower IQs than others fail to answer: What, precisely, would we gain from discussing this particular issue?


A second purpose of being “honest” about a racial IQ gap would be the opposite of the first: We might take the gap as a reason for giving not less but more attention to redressing race-based inequities. That is, could we imagine an America in which it was accepted that black people labored — on average, of course — under an intellectual handicap, and an enlightened, compassionate society responded with a Great Society–style commitment to the uplift of the people thus burdened?

I am unaware of any scholar or thinker who has made this argument, perhaps because it, too, is an obvious fantasy. Officially designating black people as a “special needs” race perpetually requiring compensatory assistance on the basis of their intellectual inferiority would run up against the same implacable resistance as condemning them to menial roles for the same reason. The impulse that rejects the very notion of IQ differences between races will thrive despite any beneficent intentions founded on belief in such differences.

The article is here.

Saturday, July 22, 2017

Mapping Cognitive Structure onto the Landscape of Philosophical Debate

An Empirical Framework with Relevance to Problems of Consciousness, Free will and Ethics

Jared P. Friedman & Anthony I. Jack
Review of Philosophy and Psychology
pp 1–41


There has been considerable debate in the literature as to whether work in experimental philosophy (X-Phi) actually makes any significant contribution to philosophy. One stated view is that many X-Phi projects, notwithstanding their focus on topics relevant to philosophy, contribute little to philosophical thought. Instead, it has been claimed the contribution they make appears to be to cognitive science. In contrast to this view, here we argue that at least one approach to X-Phi makes a contribution which parallels, and also extends, historically salient forms of philosophical analysis, especially contributions from Immanuel Kant, William James, Peter F. Strawson and Thomas Nagel. The framework elaborated here synthesizes philosophical theory with empirical evidence from psychology and neuroscience and applies it to three perennial philosophical problems. According to this account, the origin of these three problems can be illuminated by viewing them as arising from a tension between two distinct types of cognition, each of which is associated with anatomically independent and functionally inhibitory neural networks. If the parallel we draw, between an empirical project and historically highly influential examples of philosophical analysis, is viewed as convincing, it follows that work in the cognitive sciences can contribute directly to philosophy. Further, this conclusion holds whether the empirical details of the account are correct or not.

The article is here.

Friday, July 21, 2017

Judgment Before Emotion: People Access Moral Evaluations Faster than Affective States

Corey Cusimano, Stuti Thapa Magar, & Bertram F. Malle


Theories about the role of emotions in moral cognition make different predictions about the relative speed of moral and affective judgments: those that argue that felt emotions are causal inputs to moral judgments predict that recognition of affective states should precede moral judgments; theories that posit emotional states as the output of moral judgment predict the opposite. Across four studies, using a speeded reaction time task, we found that self-reports of felt emotion were delayed relative to reports of event-directed moral judgments (e.g., badness) and were no faster than person-directed moral judgments (e.g., blame). These results pose a challenge to prominent theories arguing that moral judgments are made on the basis of reflecting on affective states.

The article is here.

Enabling torture: APA, clinical psychology training and the failure to disobey.

Alice LoCicero, Robert P. Marlin, David Jull-Patterson, Nancy M. Sweeney, Brandon Lee Gray, & J. Wesley Boyd
Peace and Conflict: Journal of Peace Psychology, Vol 22(4), Nov 2016, 345-355.


The American Psychological Association (APA) has historically had close ties with the U.S. Department of Defense (DOD). Recent revelations describe problematic outcomes of those ties, as some in the APA colluded with the DOD to allow psychologists to participate, with expectation of impunity, in harsh interrogations that amounted to torture of Guantanamo detainees, during the Bush era. We now know that leaders in the APA purposely misled psychologists about the establishment of policies on psychologists’ roles in interrogations. Still, the authors wondered why, when the resulting policies reflected a clear contradiction of the fundamental duty to do no harm, few psychologists, in or out of the military, protested the policies articulated in 2005 by the committee on Psychological Ethics and National Security (PENS). Previous research suggested that U.S. graduate students in clinical psychology receive little or no training in the duties of psychologists in military settings or in the ethical guidance offered by international treaties. Thus psychologists might not have been well prepared to critique the PENS policies or to refuse to participate in interrogations. To further explore this issue, the authors surveyed Directors of Clinical Training of doctoral programs in clinical psychology, asking how extensively their programs address dilemmas psychologists may face in military settings. The results indicate that most graduate programs offer little attention to dilemmas of unethical orders, violations of international conventions, or excessively harsh interrogations. These findings, combined with earlier studies, suggest that military psychologists may have been unprepared to address ethical dilemmas, whereas psychologists outside the military may have been unprepared to critique the APA’s collusion with the DOD. The authors suggest ways to address this apparent gap in ethics education for psychology graduate students, interns, and fellows.

The article is here.