Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, February 29, 2016

Mental health care 'is ruining lives'

By Nick Triggle
Health correspondent - BBC
Originally posted February 15, 2016

Inadequate and underfunded mental health care in England is leading to thousands of "tragic and unnecessary deaths", a review has found.

The report - by a taskforce set up by NHS England - said around three-quarters of people with mental health problems received no help at all.

It said more needs to be done to tackle rising suicide rates.

Ministers agreed with the findings, committing an extra £1bn a year by 2020 to treat a million more people.

This is to come out of the £8.4bn the government has promised to the health service during this Parliament and comes on top of extra money already announced for children's services.

Prime Minister David Cameron said the plan would help put "mental and physical healthcare on an equal footing".

The article is here.

Editorial Note: In spite of federal legislation in the United States ensuring mental health parity, there are frequent reports of insurance companies not following the law. Additionally, the mental health system in the US is chronically underfunded through insurance companies as well as local, state, and federal systems.

Iranians launch app to escape morality police

The Observers
Originally posted February 10, 2016

Iranian developers just launched a mobile app called "Gershad", which alerts users if the morality police are nearby.

In the Islamic Republic of Iran, the morality police, a unit of the National Police, are charged with ensuring that Iranian citizens comply with so-called Islamic law. For example, morality officers have to make sure that women wear their veil correctly. If they see a young man and woman walking together, they can stop them and ask if they are married or from the same family. If the morality police suspect that they are an unmarried couple, they can reprimand them.

The new app is meant for young Iranians, especially young women who wear their veil loosely, pushed far back on their heads and showing their hair and face.

The article is here.

Sunday, February 28, 2016

When Ethical Leader Behavior Breaks Bad

How Ethical Leader Behavior Can Turn Abusive via Ego Depletion and Moral Licensing

Szu-Han (Joanna) Lin, Jingjing Ma, and Russell E. Johnson
Journal of Applied Psychology. 01/2016; DOI: 10.1037/apl0000098


The literature to date has predominantly focused on the benefits of ethical leader behaviors for recipients (e.g., employees and teams). Adopting an actor-centric perspective, in this study we examined whether exhibiting ethical leader behaviors may come at some cost to leaders. Drawing from ego depletion and moral licensing theories, we explored the potential challenges of ethical leader behavior for actors. Across 2 studies which employed multiwave designs that tracked behaviors over consecutive days, we found that leaders’ displays of ethical behavior were positively associated with increases in abusive behavior the following day. This association was mediated by increases in depletion and moral credits owing to their earlier displays of ethical behavior. These results suggest that attention is needed to balance the benefits of ethical leader behaviors for recipients against the challenges that such behaviors pose for actors, which include feelings of mental fatigue and psychological license and ultimately abusive interpersonal behaviors.

The article is here.

Saturday, February 27, 2016


Roskies, Adina, "Neuroethics", The Stanford Encyclopedia of Philosophy (Spring 2016 Edition), Edward N. Zalta (ed.), forthcoming

Neuroethics is an interdisciplinary research area that focuses on ethical issues raised by our increased and constantly improving understanding of the brain and our ability to monitor and influence it, as well as on ethical issues that emerge from our concomitant deepening understanding of the biological bases of agency and ethical decision-making.

1. The rise and scope of neuroethics

Neuroethics focuses on ethical issues raised by our continually improving understanding of the brain, and by consequent improvements in our ability to monitor and influence brain function. Significant attention to neuroethics can be traced to 2002, when the Dana Foundation organized a meeting of neuroscientists, ethicists, and other thinkers, entitled Neuroethics: Mapping the Field. A participant at that meeting, columnist and wordsmith William Safire, is often credited with introducing and establishing the meaning of the term “neuroethics”, defining it as
the examination of what is right and wrong, good and bad about the treatment of, perfection of, or unwelcome invasion of and worrisome manipulation of the human brain. (Marcus 2002: 5)
Others contend that the word “neuroethics” was in use prior to this (Illes 2003; Racine 2010), although all agree that these earlier uses did not employ it in a disciplinary sense, or to refer to the entirety of the ethical issues raised by neuroscience.

The entire entry is here.

Friday, February 26, 2016

Automated empathy allows doctors to check on patients daily

By Barbara Feder Ostrov
Kaiser Health News
Originally posted February 10, 2016

Here is an excerpt:

"Automating empathy" is a new healthcare buzzword for helping doctors stay in touch with patients before and after medical procedures — cheaply and with minimal effort from already overextended physicians.

It may sound like an oxymoron, but it's a powerful draw for hospitals and other health care providers scrambling to adjust to sweeping changes in how they're paid for the care they provide. Whether the emails actually trigger an empathetic connection or not, the idea of tailoring regular electronic communications to patients counts as an innovation in health care with potential to save money and improve quality.

Startups like HealthLoop are promising that their technologies will help patients stick to their treatment and recovery regimens, avoid a repeat hospital stay, and be more satisfied with their care. Similar companies in the "patient engagement" industry include Wellframe, Curaspan, and Infield Health.

The article is here.

The problem with cognitive and moral psychology

Massimo Pigliucci and K.D. Irani
Plato's Footprint
Originally published February 8, 2016

Here is an excerpt:

The norm of cooperation is again presupposed as the fundamental means for deciding which of our moral intuitions we should heed. When discussing the more stringent moral principles that Peter Singer, for instance, takes to be rationally required of us concerning our duties to distant strangers, Bloom dismisses them as “unrealistic in the sense that no plausible evolutionary theory could yield such requirements for human beings.” But of course evolution is what provided us with the very limited moral instinct that Bloom himself concedes needs to be expanded through the use of reason! He seems to want to have it both ways: we ought to build on what nature gave us, so long as what we come up with is compatible with nature’s narrow demands. But why?

Let me quote once more from Shaw, who I think puts her finger precisely where the problem lies: “it is a fallacy to suggest that expertise in psychology, a descriptive natural science, can itself qualify someone to determine what is morally right and wrong. The underlying prescriptive moral standards are always presupposed antecedently to any psychological research … No psychologist has yet developed a method that can be substituted for moral reflection and reasoning, for employing our own intuitions and principles, weighing them against one another and judging as best we can. This is necessary labor for all of us. We cannot delegate it to higher authorities or replace it with handbooks. Humanly created suffering will continue to demand of us not simply new ‘technologies of behavior’ [to use B.F. Skinner’s phrase] but genuine moral understanding. We will certainly not find it in the recent books claiming the superior wisdom of psychology.”

Please note that Shaw isn’t saying that moral philosophers are the high priests to be called on, though I’m sure she would agree that those are the people that have thought longer and harder about the issues in question, and so should certainly get a place at the discussion table. She is saying that good reasoning in general, and good moral reasoning in particular, are something we all need to engage in, for the sake of our own lives and of society at large.

The entire article is here.

Thursday, February 25, 2016

Empathy is a moral force

Jamil Zaki
FORTHCOMING in Gray, K. & Graham, J. (Eds.), The Atlas of Moral Psychology

Here is an excerpt:

More recently, however, a growing countercurrent has questioned the utility of empathy in driving moral action. This argument builds on the broader idea that emotions provide powerful but noisy inputs to people’s moral calculus (Haidt, 2001). Affective reactions often tempt people to make judgments that are logically and morally indefensible. Such emotional static famously includes moral dumbfounding, under which people’s experience of disgust causes them to judge others’ actions as wrong when they have no rational basis for doing so (Cushman, Young, & Hauser, 2006). Emotion drives other irrational moral judgments, such as people’s tendency to privilege physical force (a “hot” factor) over more important dimensions such as harm when judging the moral status of an action (Greene, 2014; Greene et al., 2009). Even incidental, morally irrelevant feelings alter moral judgment, further damaging the credibility of emotion in guiding a sense of right and wrong (Wheatley & Haidt, 2005).

In sum, although emotions play a powerful role in moral judgment, they need not play a useful role. Instead, capricious emotion-driven intuitions often attract people towards internally inconsistent and wrong-headed judgments. From a utilitarian perspective aimed at maximizing well being, these biases render emotion a fundamentally mistaken moral engine (cf. Greene, 2014).

Does this criticism apply to empathy? In many ways, it does. Like other affective states, empathy arises in response to evocative experiences, often in noisy ways that hamper objectivity. For instance, people experience more empathy, and thus moral obligation to help, in response to the visible suffering of others, as in the case of Baby Jessica described above. This empathy leads people to donate huge sums of money to help individuals whose stories they read about or see on television, while ignoring widespread misery that they could more efficaciously relieve (Genevsky, Västfjäll, Slovic, & Knutson, 2013; Slovic, 2007; Small & Loewenstein, 2003). Empathy also collapses reliably when sufferers and would-be empathizers differ along dimensions of race, politics, age, or even meaningless de novo group assignments (Cikara, Bruneau, & Saxe, 2011; Zaki & Cikara, in press).

The chapter is here.

The practices of do-it-yourself brain stimulation: implications for ethical considerations and regulatory proposals

Anna Wexler
J Med Ethics doi:10.1136/medethics-2015-102704


Scientists and neuroethicists have recently drawn attention to the ethical and regulatory issues surrounding the do-it-yourself (DIY) brain stimulation community, which comprises individuals stimulating their own brains with transcranial direct current stimulation (tDCS) for self-improvement. However, to date, existing regulatory proposals and ethical discussions have been put forth without engaging those involved in the DIY tDCS community or attempting to understand the nature of their practices. I argue that to better contend with the growing ethical and safety concerns surrounding DIY tDCS, we need to understand the practices of the community. This study presents the results of a preliminary inquiry into the DIY tDCS community, with a focus on knowledge that is formed, shared and appropriated within it. I show that when making or acquiring a device, DIYers (as some members call themselves) produce a body of knowledge that is completely separate from that of the scientific community, and share it via online forums, blogs, videos and personal communications. However, when applying tDCS, DIYers draw heavily on existing scientific knowledge, posting links to academic journal articles and scientific resources and adopting the standardised electrode placement system used by scientists. Some DIYers co-opt scientific knowledge and modify it by creating their own manuals and guides based on published papers. Finally, I explore how DIYers cope with the methodological limitations inherent in self-experimentation. I conclude by discussing how a deeper understanding of the practices of DIY tDCS has important regulatory and ethical implications.

The article is here.

Wednesday, February 24, 2016

How Winning Leads to Cheating

By Jordana Cepelewicz
Scientific American
Originally published on February 2, 2016

We live, for better or for worse, in a competition-driven world. Rivalry powers our economy, sparks technological innovation and encourages academic discovery. But it also compels people to manipulate the system and commit crimes. Some figure it’s just easier—and even acceptable—to cheat.

But what if instead of examining how people behave in a competitive setting, we wanted to understand the consequences of competition on their everyday behavior? That is exactly what Amos Schurr, a business and management professor at Ben-Gurion University of the Negev, and Ilana Ritov, a psychologist at The Hebrew University of Jerusalem, discuss in a study in this week’s Proceedings of the National Academy of Sciences. “How can it be,” Schurr asks, “that successful, distinguished people—take [former New York State Gov.] Eliot Spitzer, who I think was a true civil servant when he started out his career with good intentions—turn corrupt? At the same time, you have other successful people, like Mother Theresa, who don’t become corrupt. What distinguishes between these two types of successful people?”

The article is here.

Ethical aspects of facial recognition systems in public places

Philip Brey
Journal of Information, Communication and Ethics in Society
Vol. 2, Iss. 2, pp. 97-109

This essay examines ethical aspects of the use of facial recognition technology for surveillance purposes in public and semipublic areas, focusing particularly on the balance between security and privacy and civil liberties. As a case study, the FaceIt facial recognition engine of Identix Corporation will be analyzed, as well as its use in “Smart” video surveillance (CCTV) systems in city centers and airports. The ethical analysis will be based on a careful analysis of current facial recognition technology, of its use in Smart CCTV systems, and of the arguments used by proponents and opponents of such systems. It will be argued that Smart CCTV, which integrates video surveillance technology and biometric technology, faces ethical problems of error, function creep and privacy. In a concluding section on policy, it will be discussed whether such problems outweigh the security value of Smart CCTV in public places.

The article is here.

Tuesday, February 23, 2016

Do Emotions and Morality Mix?

By Lauren Cassani Davis
The Atlantic
Originally published February 5, 2016

Daily life is peppered with moral decisions. Some are so automatic that they fail to register—like holding the door for a mother struggling with a stroller, or resisting a passing urge to elbow the guy who cut you in line at Starbucks. Others chafe a little more, like deciding whether or not to give money to a figure rattling a cup of coins on a darkening evening commute. A desire to help, a fear of danger, and a cost-benefit analysis of the contents of my wallet: these gut reactions and reasoned arguments all swirl beneath conscious awareness.

While society urges people towards morally commendable choices with laws and police, and religious traditions stipulate good and bad through divine commands, scriptures, and sermons, the final say lies within each of our heads. Rational thinking, of course, plays a role in how we make moral decisions. But our moral compasses are also powerfully influenced by the fleeting forces of disgust, fondness, or fear.

Should subjective feelings matter when deciding right and wrong? Philosophers have debated this question for thousands of years. Some say absolutely: Emotions, like our love for our friends and family, are a crucial part of what give life meaning, and ought to play a guiding role in morality. Some say absolutely not: Cold, impartial, rational thinking is the only proper way to make a decision. Emotion versus reason—it’s one of the oldest and most epic standoffs we know.

The article is here.

American attitudes toward nudges

Janice Y. Jung and Barbara A. Mellers
Judgment and Decision Making
Vol. 11, No. 1, January 2016, pp. 62-74

To successfully select and implement nudges, policy makers need a psychological understanding of who opposes nudges, how they are perceived, and when alternative methods (e.g., forced choice) might work better. Using two representative samples, we examined four factors that influence U.S. attitudes toward nudges – types of nudges, individual dispositions, nudge perceptions, and nudge frames. Most nudges were supported, although opt-out defaults for organ donations were opposed in both samples. “System 1” nudges (e.g., defaults and sequential orderings) were viewed less favorably than “System 2” nudges (e.g., educational opportunities or reminders). System 1 nudges were perceived as more autonomy threatening, whereas System 2 nudges were viewed as more effective for better decision making and more necessary for changing behavior. People with greater empathetic concern tended to support both types of nudges and viewed them as the “right” kind of goals to have. Individualists opposed both types of nudges, and conservatives tended to oppose both types. Reactant people and those with a strong desire for control opposed System 1 nudges. To see whether framing could influence attitudes, we varied the description of the nudge in terms of the target (Personal vs. Societal) and the reference point for the nudge (Costs vs. Benefits). Empathetic people were more supportive when framing highlighted societal costs or benefits, and reactant people were more opposed to nudges when frames highlighted the personal costs of rejection.

The article is here.

Monday, February 22, 2016

Morality is a muscle. Get to the gym.

Pascal-Emmanuel Gobry
The Week
Originally published January 18, 2016

Here is an excerpt:

Take the furor over "trigger warnings" in college classes and textbooks. One side believes that, in order to protect the sensitivities of some students, professors or writers should warn readers or students at the beginning of an article or course about controversial topics. Another side says that if someone can't handle rough material, then he can stop reading or step out of the room, and that trigger warnings are an unconscionable affront to freedom of thought. Interestingly, both schools clearly believe that there is one moral stance which takes the form of a rule that should be obeyed always and everywhere. Always and everywhere we should have trigger warnings to protect people's sensibilities, or always and everywhere we should not.

Both sides need a lecture in virtue ethics.

If I try to stretch my virtue of empathy, it doesn't seem at all absurd to me to imagine that, say, a young woman who has been raped might be made quite uncomfortable by a class discussion of rape in literature, and that this is something to which we should be sensitive. But the trigger warning people maybe should think more about the moral imperative to develop the virtue of courage, including intellectual courage. Then it seems to me that if you just put aside grand moral questions about freedom of inquiry, simple basic human courtesy would mean a professor would try to take account a trauma victim's sensibilities while teaching sensitive material, and students would understand that part of the goal of a college class is to challenge them. We don't need to debate universal moral values, we just need to be reminded to exercise virtue more.

The article is here.

Will Your Ethics Hold Up Under Pressure?

Ron Carucci
Originally published February 3, 2016

Here is an excerpt:

In an ironic appeal to self-interest, for which Haidt readily acknowledges the paradox, he says there are four important reasons “ethics pays.” First, there is the cost of reputation, which most analysts and experts acknowledge links closely to share price performance. Second, ethical organizations have lower costs of capital, as evidenced by Deutsche Bank’s commitment to focus on clients with higher ethical standards. Third, the white-hot war for talent, both recruiting and retaining top talent, takes a painful hit with an ethical scandal. Conversely, the best talent wants to associate with the best reputed companies. And finally, the astronomical cost of cleaning up an ethical mess can soar into the billions after shareholder losses, lawsuits, fines, and PR costs are added up. Still those aren’t the real reasons to focus on this, claims Haidt. The longer-term benefits to a world with greater ethical substance far outweigh the costs of cutting corners for short-term gains. Sadly, unethical choices have paid well for too many executives.

The article is here.

Sunday, February 21, 2016

Epistemology, Communication and Divine Command Theory

By John Danaher
Philosophical Disquisitions
Originally posted July 21, 2015

I have written about the epistemological objection to divine command theory (DCT) on a previous occasion. It goes a little something like this: According to proponents of the DCT, at least some moral statuses (like the fact that X is forbidden, or that X is bad) depend for their existence on God’s commands. In other words, without God’s commands those moral statuses would not exist. It would seem to follow that in order for anyone to know whether X is forbidden/bad (or whatever), they would need to have epistemic access to God’s commands. That is to say, they would need to know that God has commanded X to be forbidden/bad. The problem is that there is a certain class of non-believers — so-called ‘reasonable non-believers’ — who don’t violate any epistemic duties in their non-belief. Consequently, they lack epistemic access to God’s commands without being blameworthy for lacking this access. For them, X cannot be forbidden or bad.

This has been termed the ‘epistemological objection’ to DCT, and I will stick with that name throughout, but it may be a bit of a misnomer. This objection is not just about moral epistemology; it is also about moral ontology. It highlights the fact that at least some DCTs include a (seemingly) epistemic condition in their account of moral ontology. Consequently, if that condition is violated it implies that certain moral facts cease to exist (for at least some people). This is a subtle but important point: the epistemological objection does have ontological implications.

The blog post is here.

Saturday, February 20, 2016

Moral Nativism and Moral Psychology

By Paul Bloom
The Social Psychology of Morality 01/2012
DOI: 10.1037/13091-004


Moral psychology is both old and new. Old because moral thought has long been a central focus of theology and philosophy. Indeed, many of the theories that we explore today were proposed first by scholars such as Aristotle, Kant, and Hume. New because the scientific study of morality—and, specifically, the study of what goes on in a person's head when making a moral judgment—has been a topic of serious inquiry only over the last couple of decades. Even now, it is just barely mainstream. This chapter is itself a combination of the old and the new. I am going to consider two broad questions that would have been entirely familiar to philosophers such as Aristotle, but are also the topic of considerable contemporary research and theorizing: (1) What is our natural human moral endowment? (2) To what extent are moral judgments the products of our emotions? I will have the most to say about the first question, and will review a body of empirical work that bears on it; much of this research is still in progress. The answer to the second question will be briefer and more tentative, and will draw in part upon this empirical work.

The article is here.

Friday, February 19, 2016

Mental health on college campuses: A look at the numbers

By Sarah Sabatke
USA Today
Originally published January 30, 2016

Approximately 42,773 Americans commit suicide every year, according to the American Foundation for Suicide Prevention, many of whom are college students.

The University of Pennsylvania, Tulane University, Appalachian State University and Yale University, among others, made national headlines in recent years after student suicides rocked their campus communities, highlighting a growing need for comprehensive mental healthcare on college campuses.

The page of statistics and infographics is here.

A Time to Fly and a Time to Die: Suicide Tourism and Assisted Dying in Australia Considered

Hadeel Al-Alosi
UNSW Law Research Paper No. 2016-04
January 8, 2016


Recently, a series of high-profile court cases have led the Director of Public Prosecutions in the United Kingdom to publish a policy clarifying the exercise of its discretion in assisted suicide. Importantly, the experience in the United Kingdom serves as a timely reminder that Australia too should formulate its own guidelines detailing how prosecutorial discretion will be exercised in cases of assisted suicide. This is especially so given that many Australian citizens are travelling to jurisdictions where assistance in dying is legal. Any policy should not, however, distract from addressing law reform on voluntary euthanasia. Australian legislators should be consulting with the public in order to represent the opinion of the majority. Nevertheless, any future policy and law reform implemented should provide adequate safeguards and be guided by the principle of individual autonomy.

The paper is here.

Thursday, February 18, 2016

Genetic editing is like playing God – and what’s wrong with that?

Johnjoe McFadden
The Guardian
Originally published February 2, 2016

The announcement that scientists are to be allowed to edit the DNA of human embryos will no doubt provoke an avalanche of warnings from opponents of genetic modification (GM) technology, who will warn that we are “playing God” with our genes.

The opponents are right. We are indeed playing God with our genes. But it is a good thing because God, nature or whatever we want to call the agencies that have made us, often get it wrong and it’s up to us to correct those mistakes.

Sadly, of the half a million or so babies that will be born in the UK this year, about 4% will carry a genetic or major birth defect that could result in an early death, or a debilitating disease that will cause misery for the child and their family. This research will eventually lead to technologies that could edit DNA in the same way that we can edit text – to correct the mistakes before the child’s development goes to its final draft. Its successful implementation could reduce, and eventually eliminate, the birth of babies with severe genetic diseases.

The article is here.

Scientists get 'gene editing' go-ahead

By James Gallagher
Health editor, BBC News website
Originally published February 1, 2016

UK scientists have been given the go-ahead by the fertility regulator to genetically modify human embryos.

The research will take place at the Francis Crick Institute in London and aims to provide a deeper understanding of the earliest moments of human life.

The experiments will take place in the first seven days after fertilisation and could explain what goes wrong in miscarriage.

It will be illegal for the scientists to implant the embryos into a woman.

Gene editing is the manipulation of our DNA - the blueprint of life.

In a world-first last year, scientists in China announced that they had carried out gene editing in human embryos to correct a gene that causes a blood disorder.

The field is attracting controversy, with some saying that altering the DNA of an embryo is a step too far and opens the door to designer babies.

The entire article is here.

Wednesday, February 17, 2016

Complaints about doctors rarely lead to formal discipline

By Holly Moore
CBC News 
Originally posted January 29, 2016

Nearly 8,000 Canadians filed a complaint about a physician last year, but on average only about 54 doctors were formally disciplined in each of the past 15 years. Of those complaints, just over half were determined to require no further action.

Historical data examined by CBC News found cases of 817 physicians that resulted in formal discipline, which is the only part of the disciplinary process for colleges of physicians and surgeons that is consistently made public across Canada.

"That number's not anywhere near what's actually happening. Those are the ones you could get to," said Ann Van Regan, a volunteer responder with TELL (Therapy Exploitation Link Line), a network of survivors of sex abuse by physicians and psychotherapists. "They say they're taking it seriously, but their actions show that they are not."

The article is here.

Prevalence and Characteristics of Physicians Prone to Malpractice Claims

D. M. Studdert, M. M. Bismark, M. M. Mello, H. Singh, and M. J. Spittal
N Engl J Med 2016; 374:354-362
January 28, 2016


The distribution of malpractice claims among physicians is not well understood. If claim-prone physicians account for a substantial share of all claims, the ability to reliably identify them at an early stage could guide efforts to improve care.


Using data from the National Practitioner Data Bank, we analyzed 66,426 claims paid against 54,099 physicians from 2005 through 2014. We calculated concentrations of claims among physicians. We used multivariable recurrent-event survival analysis to identify characteristics of physicians at high risk for recurrent claims and to quantify risk levels over time.


Approximately 1% of all physicians accounted for 32% of paid claims. Among physicians with paid claims, 84% incurred only one during the study period (accounting for 68% of all paid claims), 16% had at least two paid claims (accounting for 32% of the claims), and 4% had at least three paid claims (accounting for 12% of the claims). In adjusted analyses, the risk of recurrence increased with the number of previous paid claims. For example, as compared with physicians who had one previous paid claim, the 2160 physicians who had three paid claims had three times the risk of incurring another (hazard ratio, 3.11; 95% confidence interval [CI], 2.84 to 3.41); this corresponded in absolute terms to a 24% chance (95% CI, 22 to 26) of another paid claim within 2 years. Risks of recurrence also varied widely according to specialty — for example, the risk among neurosurgeons was four times as great as the risk among psychiatrists.
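The concentration statistic above ("1% of physicians accounted for 32% of claims") can be computed directly from a ledger of paid claims. A minimal sketch in Python, using synthetic data purely for illustration (this is not the National Practitioner Data Bank dataset, and `claim_concentration` is a hypothetical helper, not the authors' method):

```python
from collections import Counter

def claim_concentration(claims, top_frac=0.01):
    """Share of all paid claims attributable to the top `top_frac`
    of physicians, ranked by number of paid claims.

    `claims` is a list of physician IDs, one entry per paid claim.
    """
    counts = Counter(claims)
    total = len(claims)
    # Rank physicians by paid-claim count, most claim-prone first.
    ranked = sorted(counts.values(), reverse=True)
    # At least one physician makes up the "top" group.
    k = max(1, int(len(ranked) * top_frac))
    return sum(ranked[:k]) / total

# Synthetic illustration: 100 physicians, one of whom (the top 1%)
# accounts for a disproportionate share of the claims.
claims = ["dr_0"] * 30 + [f"dr_{i}" for i in range(1, 100)]
share = claim_concentration(claims, top_frac=0.01)
```

Here `share` is 30/129, roughly 0.23: the single most claim-prone physician holds about a quarter of all claims in this toy ledger, the same kind of skew the study reports at national scale.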


Over a recent 10-year period, a small number of physicians with distinctive characteristics accounted for a disproportionately large number of paid malpractice claims.

The article is here.

Tuesday, February 16, 2016

Why You Should Stop Using the Phrase ‘the Mentally Ill’

By Tanya Basu
New York Magazine
Originally published February 2, 2016

Here is an excerpt:

What’s most surprising is the reaction that counselors have when the phrase “the mentally ill” is used: They’re more likely to believe that those suffering from mental illness should be controlled and isolated from the rest of the community. That's pretty surprising, given that these counselors are perhaps the ones most likely to be aware of the special needs and varying differences in diagnoses of the group.

Counselors also showed the largest differences in how intolerant they were based on the language, which boosted the researchers’ belief that simply changing language is important in not only understanding people who suffer from mental illness but also helping them adjust and cope. “Even counselors who work every day with people who have mental illness can be affected by language,” Granello said in a press release. “They need to be aware of how language might influence their decision-making when they work with clients.”

The entire article is here.

From Good Institutions to Good Norms: Top-Down Incentives to Cooperate Foster Prosociality But Not Norm Enforcement

Michael N Stagnaro, Antonio A. Arechar, & David G. Rand
Social Science Research Network


What makes people willing to pay costs to help others, and to punish others’ selfishness? And why does the extent of such behaviors vary markedly across cultures? To shed light on these questions, we explore the role of formal institutions in shaping individuals’ prosociality and punishment. In Study 1 (N=707), we found that the quality of the institutions that participants were exposed to in daily life was positively associated with giving in a Dictator Game, but had little relationship with punishment in a Third-Party Punishment Game. In Study 2 (N=516), we investigated causality by experimentally manipulating institutional quality using a centralized punishment institution applied to a repeated Public Goods Game. Consistent with Study 1’s correlational results, we found that high institutional quality led to significantly more prosociality in a subsequent Dictator Game, but did not have a significant overall effect on subsequent punishment. Thus we present convergent evidence that the quality of institutions one is exposed to “spills over” to affect subsequent prosociality, but not punishment. These findings support a theory of social heuristics, suggest boundary conditions on spillover effects of cooperation, and demonstrate the power of effective institutions for instilling habits of virtue and creating cultures of cooperation.
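Editorial note: the Study 2 design can be sketched as a public goods game with a centralized punishment institution. The payoff structure and parameter values below are assumptions for illustration, not the paper's actual protocol.

```python
import random

def public_goods_round(contributions, multiplier=1.6, fine=1.0, audit_prob=0.8):
    """One round of a public goods game with a centralized punishment
    institution: each player has an endowment of 1 unit and contributes
    0 or 1; the pot is multiplied and shared equally; non-contributors
    are fined with probability `audit_prob` (a stand-in for
    institutional quality)."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    payoffs = []
    for c in contributions:
        payoff = share + (1.0 - c)  # keep whatever you did not contribute
        if c == 0 and random.random() < audit_prob:
            payoff -= fine  # the institution punishes free-riding
        payoffs.append(payoff)
    return payoffs
```

With a reliable institution (`audit_prob` near 1), the expected fine outweighs the private gain from withholding, so contributing becomes individually advantageous; the paper's finding is that this cooperative default then spills over into a later Dictator Game.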

The article is here.

Monday, February 15, 2016

If You’re Loyal to a Group, Does It Compromise Your Ethics?

By Francesca Gino
Harvard Business Review
Originally posted January 06, 2016

Here are two excerpts:

Most of us feel loyalty, whether to our clan, our comrades, an organization, or a cause. These loyalties are often important aspects of our social identity. Once a necessity for survival and propagation of the species, loyalty to one’s in-group is deeply rooted in human evolution.

But the incidents of wrongdoing that capture the headlines make it seem like loyalty is all too often a bad thing, corrupting many aspects of our personal and professional lives. My recent research, conducted in collaboration with Angus Hildreth of the University of California, Berkeley and Max Bazerman of Harvard Business School, suggests that this concern about loyalty is largely misplaced. In fact, we found loyalty to a group can increase, rather than decrease, honest behavior.


As our research shows, loyalty can be a driver of good behavior, but when competition among groups is high, it can lead us to behave unethically. When we are part of a group of loyal members, traits associated with loyalty — such as honor, honesty, and integrity — are very salient in our minds. But when loyalty seems to demand a different type of goal, such as competing with other groups and winning at any cost, behaving ethically becomes a less important goal.

The article is here.

When Deliberation Isn’t Smart

By Adam Bear and David Rand
Originally published January 25, 2016

Cooperation is essential for successful organizations. But cooperating often requires people to put others’ welfare ahead of their own. In this post, we discuss recent research on cooperation that applies the “Thinking, fast and slow” logic of intuition versus deliberation. We explain why people sometimes (but not always) cooperate in situations where it’s not in their self-interest to do so, and show how properly designed policies can build “habits of virtue” that create a culture of cooperation. TL;DR summary: intuition favors behaviors that are typically optimal, so institutions that make cooperation typically advantageous lead people to adopt cooperation as their intuitive default; this default then “spills over” into settings where it’s not actually individually advantageous to cooperate.

Life is full of opportunities to make personal sacrifices on behalf of others, and we often rise to the occasion. We do favors for co-workers and friends, give money to charity, donate blood, and engage in a host of other cooperative endeavors. Sometimes, these nice deeds are reciprocated (like when we help out a friend, and she helps us with something in return). Other times, however, we pay a cost and get little in return (like when we give money to a homeless person whom we’ll never encounter again).

Although you might not realize it, nowhere is the importance of cooperation more apparent than in the workplace. If your boss is watching you, you’d probably be wise to be a team player and cooperate with your co-workers, since doing so will enhance your reputation and might even get you a promotion down the road. In other instances, though, you might get no recognition from, say, helping out a fellow employee who needs assistance meeting a deadline, or who calls out sick.

The article is here.

Sunday, February 14, 2016

Why people fall for pseudoscience

By Sian Townson
The Guardian
Originally published January 26, 2016

Pseudoscience is everywhere – on the back of your shampoo bottle, on the ads that pop up in your Facebook feed, and most of all in the Daily Mail. Bold statements in multi-syllabic scientific jargon give the false impression that they’re supported by laboratory research and hard facts.

Magnetic wristbands improve your sporting performance, carbs make you fat, and just about everything gives you cancer.

Of course, we scientists accept that sometimes people believe things we don’t agree with. That’s fine. Science is full of people who disagree with one another. If we all thought exactly the same way, we could retire and call the status quo truth.

But when people think snake oil is backed up by science, we have to challenge that. So why is it so hard?

The article is here.

Saturday, February 13, 2016

Pentagon Wants Psychologists to End Ban on Interrogation Role

By James Risen
The New York Times
Originally posted on January 24, 2016

The Pentagon has asked the American Psychological Association to reconsider its ban on the involvement of psychologists in national security interrogations at the Guantánamo Bay prison and other facilities.

The Defense Department reduced its use of psychologists at Guantánamo in late 2015 in response to the policy approved by the association last summer.

But in a letter and accompanying memo to association officials this month, Brad Carson, the acting principal deputy secretary of defense for personnel and readiness, asked that the group, the nation’s largest professional organization for psychologists, revisit its “blanket prohibition.”

Although “the Department of Defense understands the desire of the American psychology profession to make a strong statement regarding reports about the role of former military psychologists more than a dozen years ago, the issue now is to apply the lessons learned to guide future conduct,” Mr. Carson wrote.

The article is here.

Friday, February 12, 2016

Harm is all you need? Best interests and disputes about parental decision-making

by Giles Birchley
J Med Ethics 2016;42:111-115


A growing number of bioethics papers endorse the harm threshold when judging whether to override parental decisions. Among other claims, these papers argue that the harm threshold is easily understood by lay and professional audiences and correctly conforms to societal expectations of parents in regard to their children. English law contains a harm threshold which mediates the use of the best interests test in cases where a child may be removed from her parents. Using Diekema's seminal paper as an example, this paper explores the proposed workings of the harm threshold. I use examples from the practical use of the harm threshold in English law to argue that the harm threshold is an inadequate answer to the indeterminacy of the best interests test. I detail two criticisms: First, the harm standard has evaluative overtones and judges are loath to employ it where parental behaviour is misguided but they wish to treat parents sympathetically. Thus, by focusing only on ‘substandard’ parenting, harm is problematic where the parental attempts to benefit their child are misguided or wrong, such as in disputes about withdrawal of medical treatment. Second, when harm is used in genuine dilemmas, court judgments offer different answers to similar cases. This level of indeterminacy suggests that, in practice, the operation of the harm threshold would be indistinguishable from best interests. Since indeterminacy appears to be the greatest problem in elucidating what is best, bioethicists should concentrate on discovering the values that inform best interests.

The article is here.

Growing use of neurobiological evidence in criminal trials, new study finds

By Emily Underwood
Originally posted January 21, 2016

Here is an excerpt:

Overall, the new study suggests that neurobiological evidence has improved the U.S. criminal justice system “through better determinations of competence and considerations about the role of punishment,” says Judy Illes, a neuroscientist at the University of British Columbia, Vancouver, in Canada. That is not Farahany’s interpretation, however. With a few notable exceptions, use of neurobiological evidence in courtrooms “continues to be haphazard, ad hoc, and often ill conceived,” she and her colleagues write. Lawyers rarely heed scientists’ cautions “that the neurobiological evidence at issue is weak, particularly for making claims about individuals rather than studying between-group differences,” they add.

The article is here.

Thursday, February 11, 2016

An American Psychiatric Horror Story

By Todd Essig
Originally posted January 24, 2016

Here is an excerpt:

In order to say denying care is a good thing, Bennett has to denigrate the value of the care provided. He wants readers to believe weekly psychotherapy, or whatever frequency and duration a patient and therapist determine is in the patient’s best interests, has “limited potential to heal and protect.” He concludes this because, as he writes, “Objectively, there’s little evidence that the treatment relationship is as healing, powerful or anchoring as we and our patients wish it would be…”

That is such an absurd pretzel I have to resist the urge to turn on my caps lock. Of course treatment is NEVER as amazing as people wish it would be. That’s what makes them wishes and not plans. His is a meaningless statement because not gratifying wishes for transcendent change is not an outcome measure. It is an inevitability. But that’s the reason he says therapy has limited potential.

And I should point out, every (EVERY!) medical intervention has limits. Remember the old joke about the patient who gets an unequivocal yes after asking his surgeon if he’ll be able to play the piano after the life-saving operation only to say “that’s great, I can’t play now!” Well, according to Bennett that would be reason enough for an insurance company to deny coverage for the life-saving operation.

The article is here.

‘Is this knowledge mine and nobody else's? I don't feel that.’

Patient views about consent, confidentiality and information-sharing in genetic medicine

Sandi Dheensa, Angela Fenwick, and Anneke Lucassen
J Med Ethics doi:10.1136/medethics-2015-102781


In genetic medicine, a patient's diagnosis can mean their family members are also at risk, raising a question about how consent and confidentiality should function in clinical genetics. This question is particularly pressing when it is unclear whether a patient has shared information. Conventionally, healthcare professionals view confidentiality at an individual level and ‘disclosure without consent’ as the exception, not the rule. The relational joint account model, by contrast, conceptualises genetic information as confidential at the familial level and encourages professionals to take disclosure as the default position. In this study, we interviewed 33 patients about consent and confidentiality and analysed data thematically. Our first theme showed that although participants thought of certain aspects of genetic conditions—for example, the way they affect day-to-day health—as somewhat personal, they perceived genetic information—for example, the mutation in isolation—as familial. Most thought these elements were separable and thought family members had a right to know the latter, identifying a broad range of harms that would justify disclosure. Our second theme illustrated that participants nonetheless had some concerns about what, if any, implications there would be of professionals treating such information as familial and they emphasised the importance of being informed about the way their information would be shared. Based on these results, we recommend that professionals take disclosure as the default position, but make clear that they will treat genetic information as familial during initial consultations and address any concerns therein.

The article is here.

Wednesday, February 10, 2016

End-of-life care in U.S. not as costly as in Canada

By Jessica McDonald
Originally posted January 10, 2016

The United States has a reputation for providing costly -- and often unwanted -- end-of-life care. But the first study to do an international comparison finds it's not as egregious as we thought.

Compared with patients in other developed nations, Americans diagnosed with cancer spend more time in the intensive care unit and get more chemotherapy in the last months of their lives.

But fewer patients are in the hospital when they die. And the overall bill, while high, isn't the steepest. That honor goes to Canada.

"We found that end-of-life care in the United States is not the worst in the world, and I think that surprises a lot of people," said Dr. Ezekiel Emanuel, a medical ethicist at the University of Pennsylvania.

The article is here.

The consequences of dishonesty

Scott S Wiltermuth, David T Newman, Medha Raj
Current Opinion in Psychology
Volume 6, December 2015, Pages 20–24

We review recent findings that illustrate that dishonesty yields a host of unexpected consequences. We propose that many of these newly-identified consequences stem from the deceiver choosing to privilege other values over honesty, and note that these values may relate to compassion, material gain, or the desire to maintain a positive self-concept. Furthermore, we argue that conflict between these values and honesty can be used to explain the unexpected consequences of dishonest behavior. We demonstrate that these consequences need not be negative, and discuss research that illustrates that dishonest behavior can help actors generate trust, attain a sense of achievement, and generate creative ideas. In addition, we discuss recently-identified negative consequences that can result from privileging other values over honesty.

• Dishonesty yields intriguing consequences that scholars have only recently discovered.
• These consequences may stem from actors privileging other values over honesty.
• Privileging other values over honesty can yield positive consequences.
• The valence of the consequence may depend on the value endorsed over honesty.

Tuesday, February 9, 2016

On the misguided pursuit of happiness and ethical decision making: The roles of focalism and the impact bias in unethical and selfish behavior

Laura J. Noval
Organizational Behavior and Human Decision Processes
Volume 133, March 2016, Pages 1–16


An important body of research in the field of behavioral ethics argues that individuals behave unethically and selfishly because they want to obtain desired outcomes, such as career advancement and monetary rewards. Concurrently, a large body of literature in social psychology has shown that the subjective value of an outcome is determined by its anticipated emotional impact. Such impact has been consistently found to be overestimated both in its intensity and in its duration (i.e. impact bias) due to focalism (i.e. excessive focus on the desired outcome). Across four empirical studies, this investigation demonstrates that reducing focalism and thereby attenuating the impact bias in regards to desired outcomes decreases people’s tendency to engage in both unethical and selfish behavior to obtain those outcomes.


• Individuals engage in unethical and selfish behavior to obtain desired outcomes, such as monetary or career rewards.
• The anticipated emotional impact of the outcomes individuals seek to obtain is overestimated (i.e. impact bias).
• The impact bias results from focalism (i.e. excessive focus on an outcome).
• In four studies, focalism and the impact bias about desired outcomes were experimentally reduced.
• The focalism reduction resulted in a decreased tendency of individuals to engage in unethical and selfish behavior.

The article is here.

Ethical dissonance, justifications, and moral behavior

Rachel Barkan, Shahar Ayal, and Dan Ariely
Current Opinion in Psychology
Volume 6, December 2015, Pages 157–161


Ethical dissonance is triggered by the inconsistency between the aspiration to uphold a moral self-image and the temptation to benefit from unethical behavior. In terms of a temporal distinction, anticipated dissonance occurs before people commit a moral violation. In contrast, experienced dissonance occurs after people realize they have violated their moral code. We review the psychological mechanisms and justifications people use to reduce ethical dissonance in order to benefit from wrongdoing and still feel moral. We then propose harnessing anticipated dissonance to help people resist temptation, and utilizing experienced dissonance to prompt moral compensation and atonement. We argue that rather than viewing ethical dissonance as a threat to self-image, we should help people see it as the gate-keeper of their morality.


• Ethical dissonance represents the tension between moral-self and unethical behavior.
• Justifications reduce ethical dissonance, allowing people to do wrong and still feel moral.
• Ethical dissonance can be anticipated before, or experienced after, the violation.
• Effective moral interventions can harness ethical dissonance as a moral gate-keeper.

The article is here.

Monday, February 8, 2016

Episode 24: The Nudge in Ethics, Psychotherapy, and Public Policy

Nudge theory has gained popularity in behavioral science, mainly in the field of behavioral economics.  The theory broadly indicates that indirect suggestions or contextual changes can influence choices or compliance with healthy behaviors or decisions.  Nudge theory contrasts its approach with direct suggestions, instructions, and education.  In psychotherapy, we nudge patients frequently.  Sometimes we do it consciously, other times unconsciously.  Because of this potentially powerful influence over our clients, we must remain vigilant about our nudges in the form of soft paternalism or projecting our values onto our patients.  Psychologists must be mindful of the power imbalance in the psychotherapy relationship and our duty to respect client autonomy. 

John’s guest is Dr. Jennifer Blumenthal-Barby, Associate Professor of Medicine and Medical Ethics, Center for Medical Ethics and Health Policy, Baylor College of Medicine, located in Texas.

Click here for CE Credit for psychologists and other professionals

At the end of the podcast, the participants will be able to:
  1. Describe what “Nudge Theory” is;
  2. Explain how Nudge Theory applies to ethics in the psychotherapy relationship;
  3. Name two ways that psychologists can use nudge theory to promote healthy behaviors.


Blumenthal-Barby, J.S., & Burroughs, H. (2012). Seeking better health care outcomes: The ethics of using the "nudge". American Journal of Bioethics, 12(2), 1-10.

Blumenthal-Barby, J.S., McCullough, L.B., Kreiger, H., & Coverdale, J.C. (2013). Methods of influencing the decisions of psychiatric patients: An ethical analysis. Harvard Review of Psychiatry, 21(5), 275-279.

DeAngelis, T. (2014). Coaxing better behavior. The Monitor on Psychology, 45(11), 62.

Knapp, S., & Gavazzi, J. (2014). Is it ever ethical to lie to a patient? The Pennsylvania Psychologist.

Barkan, R., Ayal, S., & Ariely, D. (2015). Ethical dissonance, justifications, and moral behavior. Current Opinion in Psychology, 6, 157-161.

Sunstein, C.R. (2015). Fifty shades of manipulation. Journal of Behavioral Marketing.

Sunstein, C.R. (2014). The ethics of nudging. Social Science Research Network.

Sunday, February 7, 2016

Tolerable Risks? Physicians and Youth Tackle Football

Kathleen E. Bachynski, M.P.H.
N Engl J Med 2016; 374:405-407

At least 11 U.S. high-school athletes died playing football during the fall 2015 season. Their deaths attracted widespread media attention and provided fodder for ongoing debates over the safety of youth tackle football. In October 2015, the American Academy of Pediatrics (AAP) issued its first policy statement directly addressing tackling in football. The organization’s Council on Sports Medicine and Fitness conducted a review of the literature on tackling and football-related injuries and evaluated the potential effects of limiting or delaying tackling on injury risk. It found that concussions and catastrophic injuries are particularly associated with tackling and that eliminating tackling from football would probably reduce the incidence of concussions, severe injuries, catastrophic injuries, and overall injuries.

But rather than recommend that tackling be eliminated in youth football, the AAP committee primarily proposed enhancing adult supervision of the sport. It recommended that officials enforce the rules of the game, that coaches teach young players proper tackling techniques, that physical therapists and other specialists help players strengthen their neck muscles to prevent concussions, and that games and practices be supervised by certified athletic trainers. There is no systematic evidence that tackling techniques believed to be safer, such as the “heads-up” approach promoted by USA Football (amateur football’s national governing body), reduce the incidence of concussions in young athletes. Consequently, the AAP statement acknowledged the need for further study of these approaches. The policy statement also encouraged the expansion of nontackling leagues as another option for young players.

The article is here.

Saturday, February 6, 2016

Understanding Responses to Moral Dilemmas

Deontological Inclinations, Utilitarian Inclinations, and General Action Tendencies

Bertram Gawronski, Paul Conway, Joel B. Armstrong, Rebecca Friesdorf, and Mandy Hütter
In: J. P. Forgas, L. Jussim, & P. A. M. Van Lange (Eds.). (2016). Social psychology of morality. New York, NY: Psychology Press.


For centuries, societies have wrestled with the question of how to balance the rights of the individual versus the greater good (see Forgas, Jussim, & Van Lange, this volume): is it acceptable to ignore a person’s rights in order to increase the overall well-being of a larger number of people? The contentious nature of this issue is reflected in many contemporary examples, including debates about whether it is legitimate to cause harm in order to protect societies against threats (e.g., shooting an abducted passenger plane to prevent a terrorist attack) and whether it is acceptable to refuse life-saving support for some people in order to protect the well-being of many others (e.g., refusing the return of American citizens who became infected with Ebola in Africa for treatment in the US). These issues have captured the attention of social scientists, politicians, philosophers, lawmakers, and citizens alike, partly because they involve a conflict between two moral principles.

The first principle, often associated with the moral philosophy of Immanuel Kant, emphasizes the irrevocable universality of rights and duties. According to the principle of deontology, the moral status of an action is derived from its consistency with context-independent norms (norm-based morality). From this perspective, violations of moral norms are unacceptable irrespective of the anticipated outcomes (e.g., shooting an abducted passenger plane is always immoral because it violates the moral norm not to kill others). The second principle, often associated with the moral philosophy of John Stuart Mill, emphasizes the greater good. According to the principle of utilitarianism, the moral status of an action depends on its outcomes, more specifically its consequences for overall well-being (outcome-based morality).

Friday, February 5, 2016

Artificial intelligence: Who’s regulating the robots?

By Selina Chignall
iPolitics Canada
Originally published Jan 13, 2016

In 2014, famed theoretical physicist Stephen Hawking warned ominously that “the development of full artificial intelligence could spell the end of the human race.”

While the prospect of humanity being taken over by super-intelligent robots may seem less fanciful than it once was, the more immediate threat, say AI experts, is the lack of mobilization by governments to deal with the policy implications of AI.

John Danaher, an assistant professor of law at the National University of Ireland, Galway, who researches and blogs on AI and the relationship between humans and technology, predicts that AI will affect our lives incrementally.

“Indeed, they are already doing so. We rely on AI systems all the time, many times in ways we do not fully appreciate,” Danaher said.

With this technology already a part of our daily lives, or soon to be — with driverless cars, robots and machines helping doctors in the medical profession — there has been little attention paid to how, and whether, it should be regulated.

The article is here.

Lawyer told police of client's alleged plot after speaking with ethics hotline

By Debra Cassens Weiss
American Bar Association Journal
Originally published January 12, 2016

A Pennsylvania lawyer revealed his client’s alleged plot to “take back” the home of his ex-girlfriend using an AR-15 rifle and body armor after consulting with the state bar’s ethics hotline, police say.

Revelations by the lawyer, Seamus Dubbs of York, likely saved lives, police say. The York Daily Record has a story.

The client, Howard Timothy Cofflin Jr., told police after his arrest that he planned to kill the ex-girlfriend as well as anyone who tried to stop him, according to court records cited by the York Daily Record. Charging documents said he planned to decapitate the ex-girlfriend and to go to war with state police, Pennlive.com reports. He also had a plan to bomb state police barracks, police said.

The article is here.

Thursday, February 4, 2016

French drug trial leaves one brain dead and five critically ill

By Angelique Chrisafis
The Guardian
Originally published January 15, 2016

Here is an excerpt:

Touraine said the study was a phase one clinical trial, in which healthy volunteers take the medication to “evaluate the safety of its use, tolerance and pharmacological profile of the molecule”.

Medical trials typically have three phases to assess a new drug or device for safety and effectiveness. Phase one entails a small group of volunteers and focuses only on safety. Phase two and three are progressively larger trials to assess the drug’s effectiveness, although safety remains paramount.

Testing had already been carried out on animals, including chimpanzees, starting in July, Touraine said.

Bial said it was committed to ensuring the wellbeing of test participants and was working with authorities to discover the cause of the injuries, adding that the clinical trial had been approved by French regulators.

The story is here.

Empathy can be learned by sharing positive experiences

Yahoo News
Originally published December 28, 2015

A study by researchers at the University of Zurich indicates that empathy towards strangers can be learned and that positive experiences with others influence empathic brain responses.

According to a recent Swiss study, we are all capable of feeling empathy towards strangers. By repeating positive experiences with strangers, our brain learns and develops empathic responses.

The article is here.

Wednesday, February 3, 2016

What Makes Us Cheat? Experiment 3

by Simon Oxenham
Originally published January 13, 2016

Dan Ariely, the psychologist who popularised behavioral economics, has made a fascinating documentary exploring what makes us dishonest. I’ve just finished watching it and it’s something of a masterpiece of psychological storytelling, delving deep into contemporary tales of dishonesty, and supporting its narrative with cunningly designed experiments that have been neatly reconstructed for the film camera.

Social Norms

Whether or not we cheat has less to do with the probability of being caught than with whether we feel cheating is socially acceptable within our social circle.

The article is here.

Note: Additional research shows that those who witness unethical behavior in the workplace are more likely to engage in that behavior themselves if there are no consequences.

Two Distinct Moral Mechanisms for Ascribing and Denying Intentionality

L. Ngo, M. Kelly, C. G. Coutlee, R. M. Carter, W. Sinnott-Armstrong & S. A. Huettel
Scientific Reports 5, Article number: 17390 (2015)


Philosophers and legal scholars have long theorized about how intentionality serves as a critical input for morality and culpability, but the emerging field of experimental philosophy has revealed a puzzling asymmetry. People judge actions leading to negative consequences as being more intentional than those leading to positive ones. The implications of this asymmetry remain unclear because there is no consensus regarding the underlying mechanism. Based on converging behavioral and neural evidence, we demonstrate that there is no single underlying mechanism. Instead, two distinct mechanisms together generate the asymmetry. Emotion drives ascriptions of intentionality for negative consequences, while the consideration of statistical norms leads to the denial of intentionality for positive consequences. We employ this novel two-mechanism model to illustrate that morality can paradoxically shape judgments of intentionality. This is consequential for mens rea in legal practice and arguments in moral philosophy pertaining to terror bombing, abortion, and euthanasia among others.

The article is here.

Tuesday, February 2, 2016

The spreading of misinformation online

M. Del Vicario, A. Bessi, F. Zollo, F. Petroni, A. Scala, G. Caldarelli, H. E. Stanley, and W. Quattrociocchi
Proceedings of the National Academy of Sciences


The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. However, the World Wide Web (WWW) also allows for the rapid dissemination of unsubstantiated rumors and conspiracy theories that often elicit rapid, large, but naive social responses such as the recent case of Jade Helm 15––where a simple military exercise turned out to be perceived as the beginning of a new civil war in the United States. In this work, we address the determinants governing misinformation spreading through a thorough quantitative analysis. In particular, we focus on how Facebook users consume information related to two distinct narratives: scientific and conspiracy news. We find that, although consumers of scientific and conspiracy stories present similar consumption patterns with respect to content, cascade dynamics differ. Selective exposure to content is the primary driver of content diffusion and generates the formation of homogeneous clusters, i.e., “echo chambers.” Indeed, homogeneity appears to be the primary driver for the diffusion of contents and each echo chamber has its own cascade dynamics. Finally, we introduce a data-driven percolation model mimicking rumor spreading and we show that homogeneity and polarization are the main determinants for predicting cascades’ size.
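Editorial note: the echo-chamber mechanism can be caricatured in a few lines of code: a rumor spreads along random contacts but is only reshared by users whose worldview matches it, so cascade size rises sharply with the homogeneity of the audience. This is a toy sketch, not the authors' data-driven percolation model; all parameters are invented.

```python
import random

def cascade_size(n=1000, homogeneity=0.9, contacts=4, seed=1):
    """Toy rumor cascade: user 0 posts a rumor; each exposed user shows it
    to `contacts` random others, who reshare only if their worldview
    matches the rumor (probability `homogeneity`)."""
    rng = random.Random(seed)
    matches = [rng.random() < homogeneity for _ in range(n)]
    matches[0] = True  # the seed user believes the rumor
    seen, frontier = {0}, [0]
    while frontier:
        user = frontier.pop()
        for _ in range(contacts):
            other = rng.randrange(n)
            if other not in seen and matches[other]:
                seen.add(other)
                frontier.append(other)
    return len(seen)
```

In a homogeneous audience the effective branching factor exceeds one and the rumor percolates through most of the network; below that threshold it dies out after a handful of reshares, which mirrors the paper's claim that homogeneity and polarization predict cascade size.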

The article is here.
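The percolation idea behind the paper's model can be illustrated with a minimal sketch: a rumor spreads across a random social graph, and each exposed neighbor reshares it with some fixed probability. This is not the authors' actual model (their version is fitted to Facebook data and incorporates homogeneity and polarization); the graph construction, parameters, and reshare probability below are illustrative assumptions only.

```python
import random

def cascade_size(n, k, p, seed=0):
    """Simulate a bond-percolation-style rumor cascade on a random graph.

    n: number of users; k: links created per user;
    p: probability an exposed neighbour reshares (a crude proxy
       for echo-chamber homogeneity).
    Returns the number of users reached from a single seed post.
    """
    rng = random.Random(seed)
    # Build a sparse random graph as an undirected adjacency list.
    neighbours = {i: set() for i in range(n)}
    for i in range(n):
        for j in rng.sample(range(n), k):
            if j != i:
                neighbours[i].add(j)
                neighbours[j].add(i)
    # Breadth-first cascade from user 0: each newly exposed
    # neighbour reshares with probability p.
    reached, frontier = {0}, [0]
    while frontier:
        nxt = []
        for u in frontier:
            for v in neighbours[u]:
                if v not in reached and rng.random() < p:
                    reached.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(reached)
```

Varying `p` reproduces the qualitative finding: below a percolation threshold cascades stay tiny, while in a highly homogeneous cluster (large `p`) a single post can sweep most of the component, which is why homogeneity dominates cascade size in the paper's analysis.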

What Makes Us Cheat? Experiment 2

by Simon Oxenham
Originally published January 13, 2016

Dan Ariely, the psychologist who popularised behavioral economics, has made a fascinating documentary exploring what makes us dishonest. I’ve just finished watching it and it’s something of a masterpiece of psychological storytelling, delving deep into contemporary tales of dishonesty, and supporting its narrative with cunningly designed experiments that have been neatly reconstructed for the film camera.


The article is here.

Monday, February 1, 2016

What Makes Us Cheat? Experiment 1

by Simon Oxenham
Originally published January 13, 2016

Dan Ariely, the psychologist who popularised behavioral economics, has made a fascinating documentary exploring what makes us dishonest. I’ve just finished watching it and it’s something of a masterpiece of psychological storytelling, delving deep into contemporary tales of dishonesty, and supporting its narrative with cunningly designed experiments that have been neatly reconstructed for the film camera.

Matrix Experiments and Big Cheaters vs Little Cheaters

The article is here.

How You Justified 10 Lies (or Didn’t)

By Gerald Dworkin
The New York Times - The Stone
Originally published January 14, 2016

Thanks to Stone readers who submitted a response — there were more than 10,000 — to my article, "Are These 10 Lies Justified?" Judging from the number of replies, the task of determining when it is or is not acceptable to lie is obviously one that many people have faced in their own lives. Many of you gave your own examples of lies told and why you believed they were or were not justified. It was heartening to find so many people prepared to reason thoughtfully about important moral issues.

With few exceptions, readers disagreed with me about the legitimacy of one or more of the lies, all of which I believe are justified. (You can revisit the original article, here.)

The results, as well as the original scenarios that you were asked to respond to, are below.

The article is here.