Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, October 22, 2017

A Car Crash And A Mistrial Cast Doubts On Court-Ordered Mental Health Exams

Steve Burger
Side Effects Public Media
Originally posted September 26, 2017

Here is an excerpt:

Investigating a lie

Fink was often hired by the courts in Indiana, and over the last ten years had performed dozens of these competency evaluations. His scene-of-the-crash confession called into question not only the Loving trial, but every one he ever worked on.

Courts rely on psychologists to assess the mental fitness of defendants, but Fink’s story raises serious questions about how courts determine mental competency in Indiana and what system of oversight is in place to ensure defendants get a valid examination.

The judge declared a mistrial in Caleb Loving’s case, but Fink’s confession prompted a massive months-long investigation in Vanderburgh County.

Hermann led the investigation, working to untangle a mess of nearly 70 cases for which Fink performed exams or testing, determined to discover the extent of the damage he had done.

“A lot of different agencies participated in that investigation,” Hermann said. “It was a troubling case, in that someone who was literally hired by the court to come in and testify about something … [was] lying.”

The county auditor’s office provided payment histories of psychologists hired by the courts, and the Evansville Police Department spent hundreds of hours looking through records. The courts helped Hermann get access to the cases that Albert Fink had worked on.

Trump's ethics critics get their day in court

Julia Horowitz 
Originally published October 17, 2017

Ethics experts have been pressing President Trump in the media for months. On Wednesday, they'll finally get their day in court.

At the center of a federal lawsuit in New York is the U.S. Constitution's Foreign Emoluments Clause, which bars the president from accepting gifts from foreign governments without permission from Congress.

Citizens for Responsibility and Ethics in Washington, a watchdog group, will lay out its case before Judge George Daniels. Lawyers for the Justice Department have asked the judge to dismiss the case.

The obscure provision of the Constitution is an issue because Trump refused to sell his business holdings before the inauguration. Instead, he placed his assets in a trust and handed the reins of the Trump Organization to his two oldest sons, Don Jr. and Eric.

The terms of the trust make it so Trump can technically withdraw cash payments from his businesses any time he wants. He can also dissolve the trust when he leaves office -- so if his businesses do well, he'll ultimately profit.

CREW claims that because government leaders and entities frequent his hotels, clubs and restaurants, Trump is in breach of the Emoluments Clause. The fear is that international officials will try to curry favor with Trump by patronizing his properties.

The article is here.

Saturday, October 21, 2017

Thinking about the social cost of technology

Natasha Lomas
TechCrunch
Originally posted September 30, 2017

Here is an excerpt:

Meanwhile, ‘users’ like my mum are left with another cryptic puzzle of unfamiliar pieces to try to slot back together and — they hope — return the tool to the state of utility it was in before everything changed on them again.

These people will increasingly feel left behind and unplugged from a society where technology is playing an ever greater day-to-day role, and also playing an ever greater, yet largely unseen, role in shaping day-to-day society by controlling so many things we see and do. AI is the silent decision maker that really scales.

The frustration and stress caused by complex technologies that can seem unknowable — not to mention the time and mindshare that gets wasted trying to make systems work as people want them to work — doesn’t tend to get talked about in the slick presentations of tech firms with their laser pointers fixed on the future and their intent locked on winning the game of the next big thing.

All too often the fact that human lives are increasingly enmeshed with and dependent on ever more complex, and ever more inscrutable, technologies is considered a good thing. Negatives don’t generally get dwelled on. And for the most part people are expected to move along, or be moved along by the tech.

That’s the price of progress, goes the short sharp shrug. Users are expected to use the tool — and take responsibility for not being confused by the tool.

But what if the user can’t properly use the system because they don’t know how to? Are they at fault? Or is it the designers failing to properly articulate what they’ve built and pushed out at such scale? And failing to layer complexity in a way that does not alienate and exclude?

And what happens when the tool becomes so all consuming of people’s attention and so capable of pushing individual buttons it becomes a mainstream source of public opinion? And does so without showing its workings. Without making it clear it’s actually presenting a filtered, algorithmically controlled view.

There’s no newspaper style masthead or TV news captions to signify the existence of Facebook’s algorithmic editors. But increasingly people are tuning in to social media to consume news.

This signifies a major, major shift.

The article is here.

Stunner On Birth Control: Trump’s Moral Exemption Is Geared To Just 2 Groups

Julie Rovner
Kaiser Health News
Originally posted October 16, 2017

Here is an excerpt:

So what’s the difference between religious beliefs and moral convictions?

“Theoretically, it would be someone who says ‘I don’t have a belief in God,’ but ‘I oppose contraception for reasons that have nothing to do with religion or God,’ ” said Mark Rienzi, a senior counsel for the Becket Fund for Religious Liberty, which represented many of the organizations that sued the Obama administration over the contraceptive mandate.

Nicholas Bagley, a law professor at the University of Michigan, said it would apply to “an organization that has strong moral convictions but does not associate itself with any particular religion.”

What kind of an organization would that be? It turns out not to be such a mystery, Rienzi and Bagley agreed.

Among the hundreds of organizations that sued over the mandate, two — the Washington, D.C.-based March for Life and the Pennsylvania-based Real Alternatives — are anti-abortion groups that do not qualify for religious exemptions. While their employees may be religious, the groups themselves are not.

The article is here.

Friday, October 20, 2017

A virtue ethics approach to moral dilemmas in medicine

P Gardiner
J Med Ethics. 2003 Oct; 29(5): 297–302.


Most moral dilemmas in medicine are analysed using the four principles with some consideration of consequentialism but these frameworks have limitations. It is not always clear how to judge which consequences are best. When principles conflict it is not always easy to decide which should dominate. They also do not take account of the importance of the emotional element of human experience. Virtue ethics is a framework that focuses on the character of the moral agent rather than the rightness of an action. In considering the relationships, emotional sensitivities, and motivations that are unique to human society it provides a fuller ethical analysis and encourages more flexible and creative solutions than principlism or consequentialism alone. Two different moral dilemmas are analysed using virtue ethics in order to illustrate how it can enhance our approach to ethics in medicine.

A pdf download of the article can be found here.

Note from John: This article is interesting for a myriad of reasons. For me, it shows that we ethics educators have come a long way in 14 years.

The American Psychological Association and torture: How could it happen?

Bryant Welch
International Journal of Applied Psychoanalytic Studies
Volume 14 (2)

Here is an excerpt:

This same grandiosity was ubiquitous in the governance's rhetoric at the heart of the association's discussions on torture. Banning psychologists' participation in reputed torture mills was clearly unnecessary, proponents of the APA policy argued. To do so would be an “insult” to military psychologists everywhere. No psychologist would ever engage in torture. Insisting on a change in APA policy reflected a mean-spirited attitude toward the military psychologists. The supporters of the APA policy managed to transform the military into the victims in the interrogation issue.

In the end, however, it was psychologists' self-assumed importance that carried the day on the torture issue. Psychologists' participation in these detention centers, it was asserted, was an antidote to torture, since psychologists' very presence could protect the potential torture victims (presumably from Rumsfeld and Cheney, no less!). The debates on the APA Council floor, year after year, concluded with the general consensus that, indeed, psychology was very, very important to our nation's security. In fact the APA Ethics Director repeatedly advised members of the APA governance that psychologists' presence was necessary to make sure the interrogations were “safe, legal, ethical, and effective.”

We psychologists were both too good and too important to join our professional colleagues in other professions who were taking an absolutist moral position against one of the most shameful eras in our country's history. While the matter was clearly orchestrated by others, it was this self-reinforcing grandiosity that led the traditionally liberal APA governance down the slippery slope to the Bush administration's torture program.

During this period I had numerous personal communications with members of the APA governance structure in an attempt to dissuade them from ignoring the rank-and-file psychologists who abhorred the APA's position. I have been involved in many policy disagreements over the course of my career, but the smugness and illogic that characterized the response to these efforts were astonishing and went far beyond normal, even heated, give and take. Most dramatically, the intelligence that I have always found to characterize the profession of psychology was sorely lacking.

Thursday, October 19, 2017

‘But you can’t do that!’ Why immoral actions seem impossible

Jonathan Phillips
Originally posted September 29, 2017

Suppose that you’re on the way to the airport to catch a flight, but your car breaks down. Some of the actions you immediately consider are obvious: you might try to call a friend, look for a taxi, or book a later flight. If those don’t work out, you might consider something more far-fetched, such as finding public transportation or getting the tow-truck driver to tow you to the airport. But here’s a possibility that would likely never come to mind: you could take a taxi but not pay for it when you get to the airport. Why wouldn’t you think of this? After all, it’s a pretty sure-fire way to get to the airport on time, and it’s definitely cheaper than having your car towed.

One natural answer is that you don’t consider this possibility because you’re a morally good person who wouldn’t actually do that. But there are at least two reasons why this doesn’t seem like a compelling answer to the question, even if you are morally good. The first is that, though being a good person would explain why you wouldn’t actually do this, it doesn’t seem to explain why you wouldn’t have been able to come up with this as a solution in the first place. After all, your good moral character doesn’t stop you from admitting that it is a way of getting to the airport, even if you wouldn’t go through with it. And the second reason is that it seems equally likely that you wouldn’t have come up with this possibility for someone else in the same situation – even someone whom you didn’t know was morally good.

So what does explain why we don’t consider the possibility of taking a taxi but not paying? Here’s a radically different suggestion: before I mentioned it, you didn’t think it was even possible to do that. This explanation probably strikes you as too strong, but the key to it is that I’m not arguing that you think it’s impossible now, I’m arguing that you didn’t think it was possible before I proposed it.

Is There an Ideal Amount of Income Inequality?

Brian Gallagher
Originally published September 28, 2017

Here is an excerpt:

Is extreme inequality a serious problem?

Extreme inequality in the United States, and elsewhere, is deeply troubling on a number of fronts. First, there is the moral issue. For a country explicitly founded on the principles of liberty, equality, and the pursuit of happiness, protected by the “government of the people, by the people, for the people,” extreme inequality raises troubling questions of social justice that get at the very foundations of our society. We seem to have a “government of the 1 percent by the 1 percent for the 1 percent,” as the economics Nobel laureate Joseph Stiglitz wrote in his Vanity Fair essay. The Harvard philosopher Tim Scanlon argues that extreme inequality is bad for the following reasons: (1) economic inequality can give wealthier people an unacceptable degree of control over the lives of others; (2) economic inequality can undermine the fairness of political institutions; (3) economic inequality undermines the fairness of the economic system itself; and (4) workers, as participants in a scheme of cooperation that produces national income, have a claim to a fair share of what they have helped to produce.

You’re an engineer. How did you get interested in inequality?

I do design, control, optimization, and risk management for a living. I’m used to designing large systems, like chemical plants. I have a pretty good intuition for how systems will operate, how they can run efficiently, and how they may fail. When I started thinking about the free market and society as systems, I already had an intuitive grasp of how they function. Clearly there are differences between a system of inanimate entities, like chemical plants, and human society. But they’re both systems, so there are a lot of commonalities as well. My experience as a systems engineer helped me as I was groping in the darkness to get my hands around these issues, and to ask the right questions.

The article is here.

Wednesday, October 18, 2017

When Doing Some Good Is Evaluated as Worse Than Doing No Good at All

George E. Newman and Daylian M. Cain
Psychological Science published online 8 January 2014


In four experiments, we found that the presence of self-interest in the charitable domain was seen as tainting: People evaluated efforts that realized both charitable and personal benefits as worse than analogous behaviors that produced no charitable benefit. This tainted-altruism effect was observed in a variety of contexts and extended to both moral evaluations of other agents and participants’ own behavioral intentions (e.g., reported willingness to hire someone or purchase a company’s products). This effect did not seem to be driven by expectations that profits would be realized at the direct cost of charitable benefits, or the explicit use of charity as a means to an end. Rather, we found that it was related to the accessibility of different counterfactuals: When someone was charitable for self-interested reasons, people considered his or her behavior in the absence of self-interest, ultimately concluding that the person did not behave as altruistically as he or she could have. However, when someone was only selfish, people did not spontaneously consider whether the person could have been more altruistic.

The article is here.

Danny Kahneman on AI versus Humans

NBER Economics of AI Workshop 2017

Here is a rough transcription of an excerpt:

One point made yesterday was the uniqueness of humans when it comes to evaluations. It was called “judgment”. Here in my noggin it’s “evaluation of outcomes”: the utility side of the decision function. I really don’t see why that should be reserved to humans.

I’d like to make the following argument:
  1. The main characteristic of people is that they’re very “noisy”: show them the same stimulus twice and they don’t give you the same response twice. That’s why we have stochastic choice theory: there is so much variability in people’s choices given the same stimuli.
  2. Now, what can be done even without AI is a program that observes an individual; that program will be better than the individual and will make better choices for the individual, because it will be noise-free.
  3. We know an interesting tidbit from the literature on prediction that Colin cited: if you take clinicians and have them predict some criterion a large number of times, and you then develop a simple equation that predicts not the outcome but the clinicians’ judgments, that model does better at predicting the outcome than the clinicians do. That is fundamental.
This is telling you that one of the major limitations on human performance is not bias; it is just noise.
I’m maybe partly responsible for this, but when people now talk about error, they tend to reach for bias as the explanation: the first thing that comes to mind. Well, there is bias, and it is an error. But in fact most of the errors that people make are better viewed as random noise. And there’s an awful lot of it.
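Kahneman’s point about the “model of the judge” can be illustrated with a small NumPy simulation (all numbers and variable names here are hypothetical, not from the talk): a clinician applies a sensible cue-weighting policy but noisily, and a linear model fitted to the clinician’s own judgments strips out that noise and ends up predicting the criterion better than the clinician does.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 4

# Cues available to the clinician, and the true criterion they predict.
X = rng.normal(size=(n, k))
true_w = np.array([0.5, 0.3, 0.2, 0.1])
criterion = X @ true_w + rng.normal(scale=0.5, size=n)

# The clinician weights the cues reasonably, but noisily: the same
# case could receive a different judgment on a different day.
judge_w = np.array([0.4, 0.4, 0.1, 0.2])
judgment = X @ judge_w + rng.normal(scale=0.8, size=n)

# "Model of the judge": a linear fit predicting the clinician's own
# judgments from the cues. It captures the policy, discards the noise.
coef, *_ = np.linalg.lstsq(X, judgment, rcond=None)
model_pred = X @ coef

r_judge = np.corrcoef(judgment, criterion)[0, 1]
r_model = np.corrcoef(model_pred, criterion)[0, 1]
print(f"clinician vs. criterion:      r = {r_judge:.2f}")
print(f"model-of-judge vs. criterion: r = {r_model:.2f}")
assert r_model > r_judge  # the noise-free model wins
```

Because the fitted model reproduces the clinician’s average policy without the trial-to-trial noise, its predictions correlate more strongly with the criterion, which is the pattern the judgment literature Kahneman refers to reports.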

The entire transcript and target article is here.

Tuesday, October 17, 2017

Work and the Loneliness Epidemic

Vivek Murthy
Harvard Business Review

Here is an excerpt:

During my years caring for patients, the most common pathology I saw was not heart disease or diabetes; it was loneliness. The elderly man who came to our hospital every few weeks seeking relief from chronic pain was also looking for human connection: He was lonely. The middle-aged woman battling advanced HIV who had no one to call to inform that she was sick: She was lonely too. I found that loneliness was often in the background of clinical illness, contributing to disease and making it harder for patients to cope and heal.

This may not surprise you. Chances are, you or someone you know has been struggling with loneliness. And that can be a serious problem. Loneliness and weak social connections are associated with a reduction in lifespan similar to that caused by smoking 15 cigarettes a day and even greater than that associated with obesity. But we haven’t focused nearly as much effort on strengthening connections between people as we have on curbing tobacco use or obesity. Loneliness is also associated with a greater risk of cardiovascular disease, dementia, depression, and anxiety. At work, loneliness reduces task performance, limits creativity, and impairs other aspects of executive function such as reasoning and decision making. For our health and our work, it is imperative that we address the loneliness epidemic quickly.

Once we understand the profound human and economic costs of loneliness, we must determine whose responsibility it is to address the problem.

The article is here.

Is it Ethical for Scientists to Create Nonhuman Primates with Brain Disorders?

Carolyn P. Neuhaus
The Hastings Center
Originally published on September 25, 2017

Here is an excerpt:

Such is the rationale for creating primate models: the brain disorders under investigation cannot be accurately modelled in other nonhuman organisms, because of differences in genetics, brain structure, and behaviors. But research involving humans with brain disorders is also morally fraught. Some people with brain disorders experience impairments to decision-making capacity as a component or symptom of disease, and therefore are unable to provide truly informed consent to research participation. Some of the research is too invasive, and would be grossly unethical to carry out with human subjects. So, nonhuman primates, and macaques in particular, occupy a “sweet spot.” Their genetic code and brain structure are sufficiently similar to humans’ so as to provide a valid and accurate model of human brain disorders. But, they are not conferred protections from research that apply to humans and to some non-human primates, notably chimpanzees and great apes. In the United States, for example, chimpanzees are protected from invasive research, but other primates are not. Some have suggested, including in a recent article in Journal of Medical Ethics, that protections like those afforded to chimpanzees ought to be extended to other primates and other animals, such as dogs, as evidence mounts that they also have complex cognitive, social, and emotional lives. For now, macaques and other primates remain in use.

Prior to the discovery of genome editing tools like ZFNs, TALENs, and most recently, CRISPR, it was extremely challenging, almost prohibitively so, to create non-human primates with precise, heritable genome modifications. But CRISPR (clustered regularly interspaced short palindromic repeats) presents a technological advance that brings genome engineering of non-human primates well within reach.

The article is here.

Monday, October 16, 2017

Can we teach robots ethics?

Dave Edmonds
Originally published October 15, 2017

Here is an excerpt:

However machine learning throws up problems of its own. One is that the machine may learn the wrong lessons. To give a related example, machines that learn language from mimicking humans have been shown to import various biases. Male and female names have different associations. The machine may come to believe that a John or Fred is more suitable to be a scientist than a Joanna or Fiona. We would need to be alert to these biases, and to try to combat them.

A yet more fundamental challenge is that if the machine evolves through a learning process we may be unable to predict how it will behave in the future; we may not even understand how it reaches its decisions. This is an unsettling possibility, especially if robots are making crucial choices about our lives. A partial solution might be to insist that if things do go wrong, we have a way to audit the code - a way of scrutinising what's happened. Since it would be both silly and unsatisfactory to hold the robot responsible for an action (what's the point of punishing a robot?), a further judgement would have to be made about who was morally and legally culpable for a robot's bad actions.

One big advantage of robots is that they will behave consistently. They will operate in the same way in similar situations. The autonomous weapon won't make bad choices because it is angry. The autonomous car won't get drunk, or tired, it won't shout at the kids on the back seat. Around the world, more than a million people are killed in car accidents each year - most by human error. Reducing those numbers is a big prize.

The article is here.

No Child Left Alone: Moral Judgments about Parents Affect Estimates of Risk to Children

Thomas, A. J., Stanford, P. K., & Sarnecka, B. W. (2016).
Collabra, 2(1), 10.


In recent decades, Americans have adopted a parenting norm in which every child is expected to be under constant direct adult supervision. Parents who violate this norm by allowing their children to be alone, even for short periods of time, often face harsh criticism and even legal action. This is true despite the fact that children are much more likely to be hurt, for example, in car accidents. Why then do bystanders call 911 when they see children playing in parks, but not when they see children riding in cars? Here, we present results from six studies indicating that moral judgments play a role: The less morally acceptable a parent’s reason for leaving a child alone, the more danger people think the child is in. This suggests that people’s estimates of danger to unsupervised children are affected by an intuition that parents who leave their children alone have done something morally wrong.

Here is part of the discussion:

The most important conclusion we draw from this set of experiments is the following: People don’t only think that leaving children alone is dangerous and therefore immoral. They also think it is immoral and therefore dangerous. That is, people overestimate the actual danger to children who are left alone by their parents, in order to better support or justify their moral condemnation of parents who do so.

This brings us back to our opening question: How can we explain the recent hysteria about unsupervised children, often wildly out of proportion to the actual risks posed by the situation? Our findings suggest that once a moralized norm of ‘No child left alone’ was generated, people began to feel morally outraged by parents who violated that norm. The need (or opportunity) to better support or justify this outrage then elevated people’s estimates of the actual dangers faced by children. These elevated risk estimates, in turn, may have led to even stronger moral condemnation of parents and so on, in a self-reinforcing feedback loop.

The article is here.

Sunday, October 15, 2017

Official sends memo to agency leaders about ethical conduct

Avery Anapol
The Hill
Originally published October 10, 2017

The head of the Office of Government Ethics is calling on the leaders of government agencies to promote an “ethical culture.”

David Apol, acting director of the ethics office, sent a memo to agency heads titled, “The Role of Agency Leaders in Promoting an Ethical Culture.” The letter was sent to more than 100 agency heads, CNN reported.

“It is essential to the success of our republic that citizens can trust that your decisions and the decisions made by your agency are motivated by the public good and not by personal interests,” the memo reads.

Several government officials are under investigation for their use of chartered planes for government business.

One Cabinet official, former Health secretary Tom Price, resigned over his use of private jets. Treasury Secretary Steven Mnuchin is also under scrutiny for his travels.

“I am deeply concerned that the actions of some in Government leadership have harmed perceptions about the importance of ethics and what conduct is, and is not, permissible,” Apol wrote.

The memo includes seven suggested actions that Apol says leaders should take to strengthen the ethical culture in their agencies. The suggestions include putting ethics officials in senior leadership meetings, and “modeling a ‘Should I do it?’ mentality versus a ‘Can I do it?’ mentality.”

The article is here.

Saturday, October 14, 2017

Who Sees What as Fair? Mapping Individual Differences in Valuation of Reciprocity, Charity, and Impartiality

Laura Niemi and Liane Young
Social Justice Research

When scarce resources are allocated, different criteria may be considered: impersonal allocation (impartiality), the needs of specific individuals (charity), or the relational ties between individuals (reciprocity). In the present research, we investigated how people’s perspectives on fairness relate to individual differences in interpersonal orientations. Participants evaluated the fairness of allocations based on (a) impartiality, (b) charity, and (c) reciprocity. To assess interpersonal orientations, we administered measures of dispositional empathy (i.e., empathic concern and perspective-taking) and Machiavellianism. Across two studies, Machiavellianism correlated with higher ratings of reciprocity as fair, whereas empathic concern and perspective-taking correlated with higher ratings of charity as fair. We discuss these findings in relation to recent neuroscientific research on empathy, fairness, and moral evaluations of resource allocations.

The article is here.

Friday, October 13, 2017

Moral Distress: A Call to Action

The Editor
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 533-536.

During medical school, I was exposed for the first time to ethical considerations that stemmed from my new role in the direct provision of patient care. Ethical obligations were now both personal and professional, and I had to navigate conflicts between my own values and those of patients, their families, and other members of the health care team. However, I felt paralyzed by factors such as my relative lack of medical experience, low position in the hospital hierarchy, and concerns about evaluation. I experienced a profound and new feeling of futility and exhaustion, one that my peers also often described.

I have since realized that this experience was likely “moral distress,” a phenomenon originally described by Andrew Jameton in 1984. For this issue, the following definition, adapted from Jameton, will be used: moral distress occurs when a clinician makes a moral judgment about a case in which he or she is involved and an external constraint makes it difficult or impossible to act on that judgment, resulting in “painful feelings and/or psychological disequilibrium”. Moral distress has subsequently been shown to be associated with burnout, which includes poor coping mechanisms such as moral disengagement, blunting, denial, and interpersonal conflict.

Moral distress as originally conceived by Jameton pertained to nurses and has been extensively studied in the nursing literature. However, until a few years ago, the literature has been silent on the moral distress of medical students and physicians.

The article is here.

Automation on our own terms

Benedict Dellot and Fabian Wallace-Stephens
Originally published September 17, 2017

Here is an excerpt:

There are three main risks of embracing AI and robotics unreservedly:
  1. A rise in economic inequality — To the extent that technology deskills jobs, it will put downward pressure on earnings. If jobs are removed altogether as a result of automation, the result will be greater returns for those who make and deploy the technology, as well as the elite workers left behind in firms. The median OECD country has already seen a decrease in its labour share of income of about 5 percentage points since the early 1990s, with capital’s share swallowing the difference. Another risk here is market concentration. If large firms continue to adopt AI and robotics at a faster rate than small firms, they will gain enormous efficiency advantages and as a result could take an excessive share of markets. Automation could lead to oligopolistic markets, where a handful of firms dominate at the expense of others.
  2. A deepening of geographic disparities — Since the computer revolution of the 1980s, cities that specialise in cognitive work have gained a comparative advantage in job creation. In 2014, 5.5 percent of all UK workers operated in new job types that emerged after 1990, but the figure for workers in London was almost double that at 9.8 percent. The ability of cities to attract skilled workers, as well as the diverse nature of their economies, makes them better placed than rural areas to grasp the opportunities of AI and robotics. The most vulnerable locations will be those that are heavily reliant on a single automatable industry, such as parts of the North East that have a large stock of call centre jobs.
  3. An entrenchment of demographic biases — If left untamed, automation could disadvantage some demographic groups. Recall our case study analysis of the retail sector, which suggested that AI and robotics might lead to fewer workers being required in bricks and mortar shops, but more workers being deployed in warehouse operative roles. Given women are more likely to make up the former and men the latter, automation in this case could exacerbate gender pay and job differences. It is also possible that the use of AI in recruitment (e.g. algorithms that screen CVs) could amplify workplace biases and block people from employment based on their age, ethnicity or gender.

Thursday, October 12, 2017

The Data Scientist Putting Ethics In AI

By Poornima Apte
The Daily Dose
Originally published September 25, 2017

Here is an excerpt:

Chowdhury’s other personal goal — to make AI accessible to everyone — is noble, but if the technology’s ramifications are not yet fully known, might it not also be dangerous? Doomsday scenarios — AI as the rapacious monster devouring all our jobs — put forward in the media may not be in our immediate futures, but Alexandra Whittington does worry that implicit human biases could make their way into the AI of the future — a problem that might be exacerbated if not accounted for early on, before any democratization of the tools occurs. Whittington is a futurist and foresight director at Fast Future. She points to a recent example of AI in law where the “robot-lawyer” was named Ross, and the legal assistant had a woman’s name, Cara. “You look at Siri and Cortana, they’re women, right?” Whittington says. “But they’re assistants, not the attorney or the accountant.” It’s the whole garbage-in, garbage-out theory, she says, cautioning against an overly idealistic approach toward the technology.

The article is here.

New Theory Cracks Open the Black Box of Deep Learning

Natalie Wolchover
Quanta Magazine
Originally published September 21, 2017

Here is an excerpt:

In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.”
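The "squeezing" Tishby describes is a claim about mutual information: a useful internal representation T retains only a fraction of the information in the input X. A minimal sketch of the underlying quantity, using toy discrete data (the variables and the sign-based "bottleneck" are purely illustrative, not Tishby's experiments):

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Estimate I(X;T) in bits from a list of (x, t) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    pt = Counter(t for _, t in pairs)
    mi = 0.0
    for (x, t), c in joint.items():
        p_xt = c / n
        mi += p_xt * log2(p_xt / ((px[x] / n) * (pt[t] / n)))
    return mi

# A "bottleneck" T that keeps only the sign of X discards the magnitude:
xs = [-2, -1, 1, 2] * 25
identity = [(x, x) for x in xs]          # T = X: keeps everything
bottleneck = [(x, x > 0) for x in xs]    # T = sign(X): 1 bit retained

print(mutual_information(identity))    # 2.0 bits
print(mutual_information(bottleneck))  # 1.0 bit
```

In the information bottleneck picture, training drives I(X;T) down like this while preserving the information relevant to the output.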

The article is here.

Wednesday, October 11, 2017

Moral programming will define the future of autonomous transportation

Josh Althauser
Venture Beat
Originally published September 24, 2017

Here is an excerpt:

First do no harm?

Regardless of public sentiment, driverless cars are coming. Giants like Tesla Motors and Google have already poured billions of dollars into their respective technologies with reasonable success, and Elon Musk has said that we are much closer to a driverless future than most suspect. Robotics software engineers are making strides in self-driving AI at an awe-inspiring (and, for some, alarming) rate.

Beyond our questions of whether we want to hand over the wheel to software, there are deeper, more troubling questions that must be asked. The real questions we should be asking as we edge closer to completely autonomous roadways lie in ethically complex areas. Among these areas of concern, one very difficult question stands out: should we program driverless cars to kill?

At first, the answer seems obvious. No AI should have the ability to choose to kill a human. We can more easily reconcile death that results from a malfunction of some kind — brakes that give out, a failure of the car’s visual monitoring system, or a bug in the AI’s programmatic makeup. However, defining how and when AI can inflict harm isn’t that simple.

The article is here.

The guide psychologists gave carmakers to convince us it’s safe to buy self-driving cars

Olivia Goldhill
Originally published September 17, 2017

Driverless cars sound great in theory. They have the potential to save lives, because humans are erratic, distracted, and often bad drivers. Once the technology is perfected, machines will be far better at driving safely.

But in practice, the notion of putting your life into the hands of an autonomous machine—let alone facing one as a pedestrian—is highly unnerving. Three out of four Americans are afraid to get into a self-driving car, an American Automobile Association survey found earlier this year.

Carmakers working to counter those fears and get driverless cars on the road have found an ally in psychologists. In a paper published this week in Nature Human Behaviour, three professors from the MIT Media Lab, the Toulouse School of Economics, and the University of California, Irvine discuss widespread concerns and suggest psychological techniques to help allay them:

Who wants to ride in a car that would kill them to save pedestrians?

First, they address the knotty problem of how self-driving cars will be programmed to respond if they’re in a situation where they must either put their own passenger or a pedestrian at risk. This is a real world version of an ethical dilemma called “The Trolley Problem.”

The article is here.

Tuesday, October 10, 2017

How AI & robotics are transforming social care, retail and the logistics industry

Benedict Dellot and Fabian Wallace-Stephens
Originally published September 18, 2017

Here is an excerpt:

The CHIRON project

CHIRON is a two-year project funded by Innovate UK. It strives to design care robotics for the future with a focus on dignity, independence and choice. CHIRON is a set of intelligent modular robotic systems, located in multiple positions around the home. Among its intended uses are to help people with personal hygiene tasks in the morning, get ready for the day, and support them in preparing meals in the kitchen. CHIRON’s various components can be mixed and matched to enable the customer to undertake a wide range of domestic and self-care tasks independently, or to enable a care worker to assist an increased number of customers.

The vision for CHIRON is to move from an ‘end of life’ institutional model, widely regarded as unsustainable and not fit for purpose, to a more dynamic and flexible market that offers people greater choice in the care sector when they require it.

The CHIRON project is being managed by a consortium led by Designability. The key technology partners are Bristol Robotics Laboratory and Shadow Robot Company, who have considerable expertise in conducting pioneering research and development in robotics. Award-winning social enterprise care provider Three Sisters Care will bring user-centred design to the core of the project. Smart Homes & Buildings Association will work to introduce the range of devices that will create CHIRON and make it a valuable presence in people’s homes.

The article is here.

Reasons Probably Won’t Change Your Mind: The Role of Reasons in Revising Moral Decisions

Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2017).
Journal of Experimental Psychology: General. Advance online publication.


Although many philosophers argue that making and revising moral decisions ought to be a matter of deliberating over reasons, the extent to which the consideration of reasons informs people’s moral decisions and prompts them to change their decisions remains unclear. Here, after making an initial decision in 2-option moral dilemmas, participants examined reasons for only the option initially chosen (affirming reasons), reasons for only the option not initially chosen (opposing reasons), or reasons for both options. Although participants were more likely to change their initial decisions when presented with only opposing reasons compared with only affirming reasons, these effect sizes were consistently small. After evaluating reasons, participants were significantly more likely not to change their initial decisions than to change them, regardless of the set of reasons they considered. The initial decision accounted for most of the variance in predicting the final decision, whereas the reasons evaluated accounted for a relatively small proportion of the variance in predicting the final decision. This resistance to changing moral decisions is at least partly attributable to a biased, motivated evaluation of the available reasons: participants rated the reasons supporting their initial decisions more favorably than the reasons opposing their initial decisions, regardless of the reported strategy used to make the initial decision. Overall, our results suggest that the consideration of reasons rarely induces people to change their initial decisions in moral dilemmas.

The article is here, behind a paywall.

You can contact the lead investigator for a personal copy.

Monday, October 9, 2017

Artificial Human Embryos Are Coming, and No One Knows How to Handle Them

Antonio Regalado
MIT Tech Review
September 19, 2017

Here is an excerpt:

Scientists at Michigan now have plans to manufacture embryoids by the hundreds. These could be used to screen drugs to see which cause birth defects, find others to increase the chance of pregnancy, or to create starting material for lab-generated organs. But ethical and political quarrels may not be far behind. “This is a hot new frontier in both science and bioethics. And it seems likely to remain contested for the coming years,” says Jonathan Kimmelman, a member of the bioethics unit at McGill University, in Montreal, and a leader of an international organization of stem-cell scientists.

What’s really growing in the dish? There’s no easy answer to that. In fact, no one is even sure what to call these new entities. In March, a team from Harvard University offered the catch-all “synthetic human entities with embryo-like features,” or SHEEFS, in a paper cautioning that “many new varieties” are on the horizon, including realistic mini-brains.

Shao, who is continuing his training at MIT, dug into the ethics question and came to his own conclusions. “Very early on in our research we started to pay attention to why are we doing this? Is it really necessary? We decided yes, we are trying to grow a structure similar to part of the human early embryo that is hard otherwise to study,” says Shao. “But we are not going to generate a complete human embryo. I can’t just consider my feelings. I have to think about society.”

The article is here.

Would We Even Know Moral Bioenhancement If We Saw It?

Wiseman H.
Camb Q Healthc Ethics. 2017;26(3):398-410.


The term "moral bioenhancement" conceals a diverse plurality encompassing much potential, some elements of which are desirable, some of which are disturbing, and some of which are simply bland. This article invites readers to take a better differentiated approach to discriminating between elements of the debate rather than talking of moral bioenhancement "per se," or coming to any global value judgments about the idea as an abstract whole (no such whole exists). Readers are then invited to consider the benefits and distortions that come from the usual dichotomies framing the various debates, concluding with an additional distinction for further clarifying this discourse qua explicit/implicit moral bioenhancement.

The article is here, behind a paywall.

Email the author directly for a personal copy.

Sunday, October 8, 2017

Moral outrage in the digital age

Molly J. Crockett
Nature Human Behaviour (2017)
Originally posted September 18, 2017

Moral outrage is an ancient emotion that is now widespread on digital media and online social networks. How might these new technologies change the expression of moral outrage and its social consequences?

Moral outrage is a powerful emotion that motivates people to shame and punish wrongdoers. Moralistic punishment can be a force for good, increasing cooperation by holding bad actors accountable. But punishment also has a dark side — it can exacerbate social conflict by dehumanizing others and escalating into destructive feuds.

Moral outrage is at least as old as civilization itself, but civilization is rapidly changing in the face of new technologies. Worldwide, more than a billion people now spend at least an hour a day on social media, and moral outrage is all the rage online. In recent years, viral online shaming has cost companies millions, candidates elections, and individuals their careers overnight.

As digital media infiltrates our social lives, it is crucial that we understand how this technology might transform the expression of moral outrage and its social consequences. Here, I describe a simple psychological framework for tackling this question (Fig. 1). Moral outrage is triggered by stimuli that call attention to moral norm violations. These stimuli evoke a range of emotional and behavioural responses that vary in their costs and constraints. Finally, expressing outrage leads to a variety of personal and social outcomes. This framework reveals that digital media may exacerbate the expression of moral outrage by inflating its triggering stimuli, reducing some of its costs and amplifying many of its personal benefits.

The article is here.

Saturday, October 7, 2017

Committee on Publication Ethics: Ethical Guidelines for Peer Reviewers

COPE Council.
Ethical guidelines for peer reviewers. 
September 2017. www.publicationethics.org

Peer reviewers play a role in ensuring the integrity of the scholarly record. The peer review
process depends to a large extent on the trust and willing participation of the scholarly
community and requires that everyone involved behaves responsibly and ethically. Peer
reviewers play a central and critical part in the peer review process, but may come to the role
without any guidance and be unaware of their ethical obligations. Journals have an obligation
to provide transparent policies for peer review, and reviewers have an obligation to conduct
reviews in an ethical and accountable manner. Clear communication between the journal
and the reviewers is essential to facilitate consistent, fair and timely review. COPE has heard
cases from its members related to peer review issues and bases these guidelines, in part, on
the collective experience and wisdom of the COPE Forum participants. It is hoped they will
provide helpful guidance to researchers, be a reference for editors and publishers in guiding
their reviewers, and act as an educational resource for institutions in training their students
and researchers.

Peer review, for the purposes of these guidelines, refers to reviews provided on manuscript
submissions to journals, but can also include reviews for other platforms and apply to public
commenting that can occur pre- or post-publication. Reviews of other materials such as
preprints, grants, books, conference proceeding submissions, registered reports (preregistered
protocols), or data will have a similar underlying ethical framework, but the process
will vary depending on the source material and the type of review requested. The model of
peer review will also influence elements of the process.

The guidelines are here.

Trump Administration Rolls Back Birth Control Mandate

Robert Pear, Rebecca R. Ruiz, and Laurie Goodstein
The New York Times
Originally published October 6, 2017

The Trump administration on Friday moved to expand the rights of employers to deny women insurance coverage for contraception and issued sweeping guidance on religious freedom that critics said could also erode civil rights protections for lesbian, gay, bisexual and transgender people.

The twin actions, by the Department of Health and Human Services and the Justice Department, were meant to carry out a promise issued by President Trump five months ago, when he declared in the Rose Garden that “we will not allow people of faith to be targeted, bullied or silenced anymore.”

Attorney General Jeff Sessions quoted those words in issuing guidance to federal agencies and prosecutors, instructing them to take the position in court that workers, employers and organizations may claim broad exemptions from nondiscrimination laws on the basis of religious objections.

At the same time, the Department of Health and Human Services issued two rules rolling back a federal requirement that employers must include birth control coverage in their health insurance plans. The rules offer an exemption to any employer that objects to covering contraception services on the basis of sincerely held religious beliefs or moral convictions.

More than 55 million women have access to birth control without co-payments because of the contraceptive coverage mandate, according to a study commissioned by the Obama administration. Under the new regulations, hundreds of thousands of women could lose those benefits.

The article is here.

Editor's Note: And just when the abortion rate was at pre-1973 levels.

Friday, October 6, 2017

AI Research Is in Desperate Need of an Ethical Watchdog

Sophia Chen
Wired Science
Originally published September 18, 2017

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.

The article is here.

Lawsuit Over a Suicide Points to a Risk of Antidepressants

Roni Caryn Rabin
The New York Times
Originally published September 11, 2017

Here is an excerpt:

The case is a rare instance in which a lawsuit over a suicide involving antidepressants actually went to trial; many such cases are either dismissed or settled out of court, said Brent Wisner, of the law firm Baum Hedlund Aristei Goldman, which represented Ms. Dolin.

The verdict is also unusual because Glaxo, which has asked the court to overturn the verdict or to grant a new trial, no longer sells Paxil in the United States and did not manufacture the generic form of the medication Mr. Dolin was taking. The company argues that it should not be held liable for a pill it did not make.

Concerns about safety have long dogged antidepressants, though many doctors and patients consider the medications lifesavers.

Ever since they were linked to an increase in suicidal behaviors in young people more than a decade ago, all antidepressants, including Paxil, have carried a “black box” warning label, reviewed and approved by the Food and Drug Administration, saying that they increase the risk of suicidal thinking and behavior in children, teens and young adults under age 25.

The warning labels also stipulate that the suicide risk has not been seen in short-term studies in anyone over age 24, but urges close monitoring of all patients initiating drug treatment.

The article is here.

Thursday, October 5, 2017

Leadership Takes Self-Control. Here’s What We Know About It

Kai Chi (Sam) Yam, Huiwen Lian, D. Lance Ferris, Douglas Brown
Harvard Business Review
Originally published June 5, 2017

Here is an excerpt:

Our review identified a few consequences that are consistently linked to having lower self-control at work:
  1. Increased unethical/deviant behavior: Studies have found that when self-control resources are low, nurses are more likely to be rude to patients, tax accountants are more likely to engage in fraud, and employees in general engage in various forms of unethical behavior, such as lying to their supervisors, stealing office supplies, and so on.
  2. Decreased prosocial behavior: Depleted self-control makes employees less likely to speak up if they see problems at work, less likely to help fellow employees, and less likely to engage in corporate volunteerism.
  3. Reduced job performance: Lower self-control can lead employees to spend less time on difficult tasks, exert less effort at work, be more distracted (e.g., surfing the internet during working hours), and generally perform worse than they would if their self-control were normal.
  4. Negative leadership styles: Perhaps most concerning, leaders with lower self-control often exhibit counterproductive leadership styles. They are more likely to verbally abuse their followers (rather than using positive means to motivate them), more likely to build weak relationships with their followers, and less charismatic. Scholars have estimated that such negative and abusive behavior costs corporations in the United States $23.8 billion annually.
Our review makes clear that helping employees maintain self-control is an important task if organizations want to be more effective and ethical. Fortunately, we identified three key factors that can help leaders foster self-control among employees and mitigate the negative effects of losing self-control.

The article is here.

Biased Algorithms Are Everywhere, and No One Seems to Care

Will Knight
MIT Technology Review
Originally published July 12, 2017

Here is an excerpt:

Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI. If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities. The eventual outcry might also stymie the progress of an incredibly useful technology (see “Inspecting Algorithms for Bias”).

Algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions. Proprietary algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan.

The founders of the new AI Now Initiative, Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google, say bias may exist in all sorts of services and products.

“It’s still early days for understanding algorithmic bias,” Crawford and Whittaker said in an e-mail. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”

Examples of algorithmic bias that have come to light lately, they say, include flawed and misrepresentative systems used to rank teachers, and gender-biased models for natural language processing.

The article is here.

Wednesday, October 4, 2017

Better Minds, Better Morals: A Procedural Guide to Better Judgment

G. Owen Schaefer and Julian Savulescu
Journal of Posthuman Studies
Vol. 1, No. 1 (2017), pp. 26-43


Making more moral decisions – an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we’re employing are sound and reliable. We identify four broad categories of judgment to be targeted – cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A copy of the article is here.

Google Sets Limits on Addiction Treatment Ads, Citing Safety

Michael Corkery
The New York Times
Originally published September 14, 2017

As drug addiction soars in the United States, a booming business of rehab centers has sprung up to treat the problem. And when drug addicts and their families search for help, they often turn to Google.

But prosecutors and health advocates have warned that many online searches are leading addicts to click on ads for rehab centers that are unfit to help them or, in some cases, endangering their lives.

This week, Google acknowledged the problem — and started restricting ads that come up when someone searches for addiction treatment on its site. “We found a number of misleading experiences among rehabilitation treatment centers that led to our decision,” Google spokeswoman Elisa Greene said in a statement on Thursday.

Google has taken similar steps to restrict advertisements only a few times before. Last year it limited ads for payday lenders, and in the past it created a verification system for locksmiths to prevent fraud.

In this case, the restrictions will limit a popular marketing tool in the $35 billion addiction treatment business, affecting thousands of small-time operators.

The article is here.

Tuesday, October 3, 2017

VA About To Scrap Ethics Law That Helps Safeguards Veterans From Predatory For-Profit Colleges

Adam Linehan
Task and Purpose
Originally posted October 2, 2017

An ethics law that prohibits Department of Veterans Affairs employees from receiving money or owning a stake in for-profit colleges that rake in millions in G.I. Bill tuition has “illogical and unintended consequences,” according to VA, which is pushing to suspend the 50-year-old statute.

But veteran advocacy groups say suspending the law would make it easier for the for-profit education industry to exploit its biggest cash cow: veterans. 

In a proposal published in the Federal Register on Sept. 14, VA claims that the statute — which, according to The New York Times, was enacted following a string of scandals involving the for-profit education industry — is redundant due to the other conflict-of-interest laws that apply to all federal employees and provide sufficient safeguards.

Critics of the proposal, however, say that the statute provides additional regulations that protect against abuse and provide more transparency. 

“The statute is one of many important bipartisan reforms Congress implemented to protect G.I. Bill benefits from waste, fraud, and abuse,” William Hubbard, Student Veterans of America’s vice president of government affairs, said in an email to Task & Purpose. “A thoughtful and robust public conversation should be had to ensure that the interests of student veterans is the top of the priority list.”

The article is here.

Editor's Note: The swamp continues to grow under the current administration.

Facts Don’t Change People’s Minds. Here’s What Does

Ozan Varol
Originally posted September 6, 2017

Here is an excerpt:

The mind doesn’t follow the facts. Facts, as John Adams put it, are stubborn things, but our minds are even more stubborn. Doubt isn’t always resolved in the face of facts for even the most enlightened among us, however credible and convincing those facts might be.

As a result of the well-documented confirmation bias, we tend to undervalue evidence that contradicts our beliefs and overvalue evidence that confirms them. We filter out inconvenient truths and arguments on the opposing side. As a result, our opinions solidify, and it becomes increasingly harder to disrupt established patterns of thinking.

We believe in alternative facts if they support our pre-existing beliefs. Aggressively mediocre corporate executives remain in office because we interpret the evidence to confirm the accuracy of our initial hiring decision. Doctors continue to preach the ills of dietary fat despite emerging research to the contrary.

If you have any doubts about the power of the confirmation bias, think back to the last time you Googled a question. Did you meticulously read each link to get a broad objective picture? Or did you simply skim through the links looking for the page that confirms what you already believed was true? And let’s face it, you’ll always find that page, especially if you’re willing to click through to Page 12 on the Google search results.

The article is here.

Monday, October 2, 2017

Cooperation in the Finitely Repeated Prisoner’s Dilemma

Matthew Embrey, Guillaume R. Fréchette, and Sevgi Yuksel
The Quarterly Journal of Economics
Published: 26 August 2017


More than half a century after the first experiment on the finitely repeated prisoner’s dilemma, evidence on whether cooperation decreases with experience, as suggested by backward induction, remains inconclusive. This paper provides a meta-analysis of prior experimental research and reports the results of a new experiment to elucidate how cooperation varies with the environment in this canonical game. We describe forces that affect initial play (formation of cooperation) and unraveling (breakdown of cooperation). First, contrary to the backward induction prediction, the parameters of the repeated game have a significant effect on initial cooperation. We identify how these parameters impact the value of cooperation, as captured by the size of the basin of attraction of Always Defect, to account for an important part of this effect. Second, despite these initial differences, the evolution of behavior is consistent with the unraveling logic of backward induction for all parameter combinations. Importantly, despite the seemingly contradictory results across studies, this paper establishes a systematic pattern of behavior: subjects converge to use threshold strategies that conditionally cooperate until a threshold round; and conditional on establishing cooperation, the first defection round moves earlier with experience. Simulation results generated from a learning model estimated at the subject level provide insights into the long-term dynamics and the forces that slow down the unraveling of cooperation.
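The threshold strategies the abstract describes are easy to sketch in code. The thresholds and horizon below are illustrative assumptions, not the paper's estimated parameters:

```python
def play_repeated_pd(threshold_a, threshold_b, rounds):
    """Two threshold strategists: each cooperates until their own
    threshold round, and once either player defects, conditional
    cooperation collapses for the rest of the game."""
    history = []
    broken = False
    for r in range(1, rounds + 1):
        a = (not broken) and r < threshold_a
        b = (not broken) and r < threshold_b
        history.append((a, b))
        if not (a and b):
            broken = True
    return history

# With thresholds 8 and 6 in a 10-round game, mutual cooperation
# lasts through round 5; the first defection occurs in round 6.
hist = play_repeated_pd(8, 6, 10)
first_defection = next(r for r, (a, b) in enumerate(hist, 1) if not (a and b))
print(first_defection)  # 6
```

The unraveling result then corresponds to these thresholds drifting earlier as subjects gain experience.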

The paper is here.
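The unraveling logic the abstract describes can be sketched with a toy model. This is an illustration, not the authors' estimated learning model: the payoff values (T = 5, R = 3, P = 1, S = 0), the 10-round horizon, and the grim-trigger assumption after a first defection are all assumptions made here. Under those assumptions, the best response to an opponent who first defects at round e is to defect one round earlier, which is the backward-induction pressure that moves the first defection round back.

```python
def play(d1, d2, H=10, R=3, T=5, S=0, P=1):
    """Payoffs when two threshold strategies meet: each player cooperates
    until their own defection round (d1, d2), or until anyone defects;
    after a first defection both defect for the rest of the game."""
    p1 = p2 = 0
    defected = False
    for r in range(1, H + 1):
        a1 = (r < d1) and not defected  # True = cooperate
        a2 = (r < d2) and not defected
        if a1 and a2:
            p1 += R; p2 += R
        elif a1 and not a2:
            p1 += S; p2 += T
        elif a2 and not a1:
            p1 += T; p2 += S
        else:
            p1 += P; p2 += P
        if not (a1 and a2):
            defected = True
    return p1, p2

def best_response(e, H=10):
    """Best defection round against an opponent who first defects at
    round e (d = H + 1 means cooperating for the whole game)."""
    return max(range(1, H + 2), key=lambda d: play(d, e, H)[0])
```

Iterating `best_response` starting from full cooperation (d = 11) walks the defection round back one step at a time toward round 1, mirroring the finding that, conditional on establishing cooperation, the first defection round moves earlier with experience.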

The Role of a “Common Is Moral” Heuristic in the Stability and Change of Moral Norms

Lindström, B., Jangard, S., Selbing, I., & Olsson, A. (2017).
Journal of Experimental Psychology: General.


Moral norms are fundamental for virtually all social interactions, including cooperation. Moral norms develop and change, but the mechanisms underlying when, and how, such changes occur are not well-described by theories of moral psychology. We tested, and confirmed, the hypothesis that the commonness of an observed behavior consistently influences its moral status, which we refer to as the common is moral (CIM) heuristic. In 9 experiments, we used an experimental model of dynamic social interaction that manipulated the commonness of altruistic and selfish behaviors to examine the change of people’s moral judgments. We found that both altruistic and selfish behaviors were judged as more moral, and less deserving of punishment, when common than when rare, which could be explained by a classical formal model (social impact theory) of behavioral conformity. Furthermore, judgments of common versus rare behaviors were faster, indicating that they were computationally more efficient. Finally, we used agent-based computer simulations to investigate the endogenous population dynamics predicted to emerge if individuals use the CIM heuristic, and found that the CIM heuristic is sufficient for producing 2 hallmarks of real moral norms: stability and sudden changes. Our results demonstrate that commonness shapes our moral psychology through mechanisms similar to behavioral conformity with wide implications for understanding the stability and change of moral norms.

The article is here.
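The agent-based dynamics in the abstract's final claim can be sketched in a few lines. This is a minimal illustration, not the authors' simulation: the population size, noise rate, and the particular nonlinear conformity rule below are assumptions chosen here to show how commonness-based copying can produce long stable spells in a norm.

```python
import random

def simulate(n_agents=100, n_steps=2000, noise=0.05, seed=1):
    """Two competing behaviors (1 = altruistic, 0 = selfish). At each
    step one random agent re-evaluates: with probability `noise` it
    picks a behavior at random; otherwise it copies a behavior with a
    probability that rises nonlinearly with its commonness, a crude
    stand-in for the CIM heuristic (common behaviors are judged
    acceptable and adopted). Returns the altruistic share over time."""
    rng = random.Random(seed)
    pop = [1] * n_agents  # start with the altruistic norm established
    history = []
    for _ in range(n_steps):
        i = rng.randrange(n_agents)
        share = sum(pop) / n_agents
        if rng.random() < noise:
            pop[i] = rng.randrange(2)  # innovation / error
        else:
            # disproportionate pull toward the majority behavior
            p = share ** 2 / (share ** 2 + (1 - share) ** 2)
            pop[i] = 1 if rng.random() < p else 0
        history.append(sum(pop) / n_agents)
    return history
```

With these parameters the established norm is stable: noise keeps perturbing the population, but the conformity rule pulls it back, so the altruistic share stays high for long stretches. Larger noise or smaller populations make the rare, sudden norm flips the abstract mentions more likely.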

Sunday, October 1, 2017

Future Frankensteins: The Ethics of Genetic Intervention

Philip Kitcher
Los Angeles Review of Books
Originally posted September 4, 2017

Here is an excerpt:

The more serious argument perceives risks involved in germline interventions. Human knowledge is partial, and so perhaps we will fail to recognize some dire consequence of eliminating a particular sequence from the genomes of all members of our species. Of course, it is very hard to envisage what might go wrong — in the course of human evolution, many DNA sequences have arisen and disappeared. Moreover, in this instance, assuming a version of CRISPR-Cas9 sufficiently reliable to use on human beings, we could presumably undo whatever damage we had done. But, a skeptic may inquire, why take any risk at all? Surely somatic interventions will suffice. No need to tamper with the germline, since we can always modify the bodies of the unfortunate people afflicted with troublesome sequences.

Doudna and Sternberg point out, in a different context, one reason why this argument fails: some genes associated with disease act too early in development (in utero, for example). There is a second reason for failure. In a world in which people are regularly rescued through somatic interventions, the percentage of later generations carrying problematic sequences is likely to increase, with the consequence that ever more resources would have to be devoted to editing the genomes of individuals. Human well-being might be more effectively promoted through a program of germline intervention, freeing those resources to help those who suffer in other ways. Once again, allowing editing of eggs and sperm seems to be the path of compassion. (The problems could be mitigated if genetic testing and in vitro fertilization were widely available and widely used, leaving somatic interventions as a last resort for those who slipped through the cracks. But extensive medical resources would still be required, and encouraging — or demanding — pre-natal testing and use of IVF would introduce a problematic and invasive form of eugenics.)

The article is here.