Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, August 31, 2015

The What and Why of Self-Deception

Zoë Chance and Michael I. Norton
Current Opinion in Psychology
Available online 3 August 2015

Scholars from many disciplines have investigated self-deception, but both defining self-deception and establishing its possible benefits have been matters of heated debate – a debate impoverished by a relative lack of empirical research. Drawing on recent research, we first classify three distinct definitions of self-deception, ranging from a view that self-deception is synonymous with positive illusions to a more stringent view that self-deception requires the presence of simultaneous conflicting beliefs. We then review recent research on the possible benefits of self-deception, identifying three adaptive functions: deceiving others, gaining social status, and securing psychological benefits. We suggest potential directions for future research.

The nature and definition of self-deception remains open to debate. Philosophers have questioned whether – and how – self-deception is possible; evolutionary theorists have conjectured that self-deception may – or must – be adaptive. Until recently, there was little evidence for either the existence or processes of self-deception; indeed, Robert Trivers wrote that research on self-deception is still in its infancy. In recent years, however, empirical research on self-deception has been gaining traction in social psychology and economics, providing much-needed evidence and shedding light on the psychology of self-deception. We first classify competing definitions of self-deception, then review recent research supporting three distinct advantages of self-deception: improved success in deceiving others, social status, and psychological benefits.

The entire article is here.

Note to Psychologists: Psychologists engage in self-deception in psychotherapy.  Psychologists typically judge psychotherapy sessions as having been more beneficial than their patients do.  Such self-deception may lead to clinical missteps and to errors in judgment, both clinical and ethical.

The Moral Code

By Nayef Al-Rodhan
Foreign Affairs
Originally published August 12, 2015

Here is an excerpt:

Today, robotics requires a much more nuanced moral code than Asimov’s “three laws.” Robots will be deployed in more complex situations that require spontaneous choices. The inevitable next step, therefore, would seem to be the design of “artificial moral agents,” a term for intelligent systems endowed with moral reasoning that are able to interact with humans as partners. In contrast with software programs, which function as tools, artificial agents have various degrees of autonomy.

However, robot morality is not simply a binary variable. In their seminal work Moral Machines, Yale’s Wendell Wallach and Indiana University’s Colin Allen analyze different gradations of the ethical sensitivity of robots. They distinguish between operational morality and functional morality. Operational morality refers to situations and possible responses that have been entirely anticipated and precoded by the designer of the robot system. This could include the profiling of an enemy combatant by age or physical appearance.

The entire article is here.

Sunday, August 30, 2015

Inside the Monkey Lab: The Ethics of Testing on Animals

By Miriam Wells
Vice News
July 7, 2015

"Of course it's pitiful for the monkeys. Everyone feels the same — you see it and you don't want it. But the point is if you want something different then you have to make something different. It doesn't happen overnight."

Speaking to VICE News, Jeffrey Bajramovic, a scientist from the Biomedical Primate Research Centre (BPRC) in Holland, was refreshingly honest. What happens to the monkeys tested on inside the center — a not-for-profit laboratory that is the largest facility of its kind in Europe, housing around 1,500 primates — is horrible. Those sent for experimentation suffer pain and distress, sometimes severe, in studies that can last for months, before ending their lives on an autopsy table.

But the tests they undertake contribute to the understanding of and development of vaccines and treatments for some of the world's most deadly and prevalent diseases. And in a grim paradox, as Bajramovic pointed out, the captive primates are also contributing to the development of alternative research methods that scientists can use so that ultimately, they don't have to test on animals at all.

It's a messy and emotional ethical dilemma that VICE News came face to face with when we gained rare access to the BPRC to see just what happens inside.

The entire article is here.

WARNING: There is a graphic and disturbing (to me) video embedded within the article.

Saturday, August 29, 2015

Weird Minds Might Destabilize Human Ethics

By Eric Schwitzgebel
The Splintered Mind Blog
Originally published August 13, 2015

Here is an excerpt:

For physics and biology, we have pretty good scientific theories by which to correct our intuitive judgments, so it's no problem if we leave ordinary judgment behind in such matters. However, it's not clear that we have, or will have, such a replacement in ethics. There are, of course, ambitious ethical theories -- "maximize happiness", "act on that maxim that you can at the same time will to be a universal law" -- but the development and adjudication of such theories depends, and might inevitably depend, on our intuitive judgments about such cases. It's because we intuitively or pre-theoretically think we shouldn't give all our cookies to the utility monster or kill ourselves to tile the solar system with hedonium that we reject the straightforward extension of utilitarian happiness-maximizing theory to such cases and reach for a different solution. But if our commonplace ethical judgments about such cases are not to be trusted, because these cases are too far beyond what we can reasonably expect human moral intuition to handle well, what then? Maybe we should kill ourselves to tile the solar system with hedonium (the minimal collection of atoms capable of feeling pleasure), and we're just unable to appreciate this fact with moral theories shaped for our limited ancestral environments?

The entire blog post is here.

Friday, August 28, 2015

Ethical Blind Spots: Explaining Unintentional Unethical Behavior

Sezer, O., F. Gino, and M. H. Bazerman. "Ethical Blind Spots: Explaining Unintentional Unethical Behavior." Current Opinion in Psychology (forthcoming).


People view themselves as more ethical, fair, and objective than others, yet often act against their moral compass. This paper reviews recent research on unintentional unethical behavior and provides an overview of the conditions under which ethical blind spots lead good people to cross ethical boundaries. First, we present the psychological processes that cause individuals to behave unethically without their own awareness. Next, we examine the conditions that lead people to fail to accurately assess others' unethical behavior. We argue that future research needs to move beyond a descriptive framework and focus on finding empirically testable strategies to mitigate unethical behavior.

The article can be found here.

Deconstructing intent to reconstruct morality

Fiery Cushman
Current Opinion in Psychology
Volume 6, December 2015, Pages 97–103


• Mental state inference is a foundational element of moral judgment.
• Its influence is usually captured by contrasting intentional and accidental harm.
• The folk theory of intentional action comprises many distinct elements.
• Moral judgment shows nuanced sensitivity to these constituent elements.
• Future research will profit from attention to the constituents of intentional action.

Mental state representations are a crucial input to human moral judgment. This fact is often summarized by saying that we restrict moral condemnation to ‘intentional’ harms. This simple description is the beginning of a theory, however, not the end of one. There is rich internal structure to the folk concept of intentional action, which comprises a series of causal relations between mental states, actions and states of affairs in the world. Moral judgment shows nuanced patterns of sensitivity to all three of these elements: mental states (like beliefs and desires), the actions that a person performs, and the consequences of those actions. Deconstructing intentional action into its elemental fragments will enable future theories to reconstruct our understanding of moral judgment.

The entire article is here.

Thursday, August 27, 2015

Steven Pinker is right about biotech and wrong about bioethics

Bill Gardner
The Incidental Economist
Originally published August 7, 2015

Here is an excerpt:

First, even by newspaper op-ed standards this is lazily argued. Pinker attributes a host of opinions to bioethicists without quoting any bioethicist. He does not cite any cases to document that bioethicists’ concerns about long term consequences have impeded research and caused harms. There likely are such cases, but he writes as if they are common. I served for years on the University of Pittsburgh IRB. For better or worse, the long term risks of biomedical research were never even discussed.

Worse, Pinker brackets “dignity” and “social justice”* in sneer quotes, as if it were self-evident that affronts to these values do not fall into the class of “identifiable harms” and as if these concerns can be dismissed without any actual argument. The only normative framework that has weight, by his lights, are the mortality and morbidity of disease. Of course mortality and morbidity are exceptionally important. But if that is the only framework that matters to Pinker he is in a very small minority.

The entire critique is here.

The Psychology of Whistleblowing

James Dungan, Adam Waytz, Liane Young
Current Opinion in Psychology


Whistleblowing—reporting another person's unethical behavior to a third party—represents an ethical quandary. In some cases whistleblowing appears heroic, whereas in other cases it appears reprehensible. This article describes how the decision to blow the whistle rests on the tradeoff that people make between fairness and loyalty. When fairness increases in value, whistleblowing is more likely; when loyalty increases in value, whistleblowing is less likely. Furthermore, we describe systematic personal, situational, and cultural factors stemming from the fairness-loyalty tradeoff that drive whistleblowing. Finally, we describe how minimizing this tradeoff and prioritizing constructive dissent can encourage whistleblowing and strengthen collectives.

The entire article is here.

Wednesday, August 26, 2015

Dreading My Patient

By Simon Yisreal Feuerman
The New York Times - Opinionator
Originally published August 25, 2015

I didn’t want him to show up.

He was a bright, handsome and winning patient. His first three sessions had been perfectly ordinary. And yet a few minutes before his fourth session, I found myself ardently wishing for him not to come.

This feeling was puzzling. It had overtaken me suddenly.

My patient was in his late 20s and had decided to enter therapy, as he explained in his first session, because he did not have enough confidence. He talked about not being able to think for himself and make his own decisions, not being able to hold his own at work or find his way when he was around women. He found that he stammered a lot and said the “wrong” things.

The entire article is here.

The Future of Morality, at Every Internet User's Fingertips

By Tim Hwang
The Atlantic
Originally posted August 5, 2015

Here is an excerpt:

The choice not to link is therefore a personal moral act: It invokes an individual responsibility around making content accessible to others online. The economics of advertising are such that linking provides a frictionless channel for an audience’s attention (read: money) to reach content. The web of information stitched together by an individual as they browse and publish across the Internet is also implicitly a web of support for the content being linked to.

This shuffles up our traditional notions of what it means to link. Linking is tangled up with our concepts of proof and good argumentation online. One links to something else in order to provide a citation that backs up a point—that’s how I’m using links in this very article, for instance. The often-heard call of “citation needed” on Wikipedia echoes much of the same functionality.

Choosing not to link in that context represents a belief that the ethical duties around linking will sometimes outweigh the need to use linking to facilitate discourse and debate online. In some ways, it implies that the latter use is the lesser necessary as the ability to find information has grown enormously from the early days of the web.

The entire article is here.

Note: The article provides examples of how linking may reward inappropriate, unethical, or immoral acts by internet sites and authors.

Tuesday, August 25, 2015

The Lion, the Myth, and the Morality Tale

By Brandon Ferdig
The American Thinker
Originally posted August 8, 2015

Here is an excerpt:

There’s nothing inherently wrong with myth and symbolism. They are emotional-mental tools used to categorize our world, to seek its improvement, to add meaning, to sink our emotional teeth into life and cultivate richness around our experience. Epic is awesome.

It was awesome for those who cried when seeing Barack Obama elected because of the interpreted representative step forward and victory of our nation. It’s awesome to feel moved by the sight of an animal that represents and elicits majesty. And it’s awesome to find other like-minded folks and bond in celebration or fight for a better world.

But there’s a risk.

To the degree that we subscribe to a particular ideology is the potential for us to color the events of our world with its tint. Suddenly we have something invested into these events -- our world view, our ego -- and exaggerated responses result. We’ll fight to defend our ideology, details and facts be damned. Get with like-minded folks, and you can create a mob.

The entire article is here.

The curious tale of Julie and Mark: Unraveling the moral dumbfounding effect

Edward B. Royzman, Kwanwoo Kim, Robert F. Leeman
Judgment and Decision Making, Vol. 10, No. 4, July 2015, pp. 296–313


The paper critically reexamines the well-known “Julie and Mark” vignette, a stylized account of two college-age siblings opting to engage in protected sex while vacationing abroad (e.g., Haidt, 2001). Since its inception, the story has been viewed as a rhetorically powerful validation of Hume’s “sentimentalist” dictum that moral judgments are not rationally deduced but arise directly from feelings of pleasure or displeasure (e.g., disgust). People’s typical reactions to the vignette are alleged to support this view by demonstrating that individuals are prone to become morally dumbfounded (Haidt, 2001; Haidt, Bjorklund, & Murphy, 2000), i.e., they tend to “stubbornly” maintain their disapproval of the act without supporting reasons. In what follows, we critically reassess the traditional account, predicated on the notion that, among other things, most subjects simply fail to be convinced that the siblings’ actions are truly harm-free, thus having excellent reasons to disapprove of these acts. In line with this critique, 3 studies found that subjects 1) tended not to believe that the siblings’ actions were in fact harmless; 2) notwithstanding that, and in spite of holding a number of “counterargument-immune” reasons, subjects could be effectively maneuvered into exhibiting all the trademark signs of a morally dumbfounded state (which they subsequently recanted), and 3) with subjects’ beliefs about harm and standards of normative evaluation properly factored in, a more rigorous assessment procedure yielded a dumbfounding estimate of about 0. Based on these and related results, we contend that subjects’ reactions are wholly in line with the rationalist model of moral judgment and that their use in support of claims of moral arationalism should be reevaluated.

The entire article is here.

Monday, August 24, 2015

Good Without Knowing it: Subtle Contextual Cues can Activate Moral Identity and Reshape Moral Intuition

Keith Leavitt, Lei Zhu, Karl Aquino
Journal of Business Ethics
Published online 30 July 2015


The role of moral intuition (i.e., a set of implicit processes which occur automatically and at the fringe of conscious awareness) has been increasingly implicated in business decisions and (un)ethical business behavior. But troublingly, because implicit processes often operate outside of conscious awareness, decision makers are generally unaware of their influence. We tested whether subtle contextual cues for identity can alter implicit beliefs. In two studies, we found that contextual cues which nonconsciously prime moral identity weaken the implicit association between the categories of “business” and “ethical,” an implicit association which has previously been linked to unethical decision making. Further, changes in this implicit association mediated the relationship between contextually primed moral identity and concern for external stakeholder groups, regardless of self-reported moral identity. Thus, our results show that subtle contextual cues can lead individuals to render more ethical judgments, by automatically restructuring moral intuition below the level of consciousness.

The entire article is here.

Detox or lose your benefits

New welfare proposals are based on bad evidence and worse ethics

Ian Hamilton
The Conversation
Originally posted August 3, 2015

When is a choice not really a choice? It could be argued that the latest proposal from the government aimed at people who have problems with drugs and alcohol is not a choice but an ultimatum – accept help for your problem or lose your right to welfare benefits.

This proposal raises some very serious issues. Treating any condition is based on consent – the person should be willing to have the treatment. In this case, people have little choice and therefore they would probably be consenting to treatment to avoid losing money. This also passes on an ethical dilemma to treatment staff, who would need to decide if they are willing to participate in state-sponsored coercion.

Sunday, August 23, 2015

Psychologist's Work For GCHQ Deception Unit Inflames Debate Among Peers

By Andrew Fishman
The Intercept
Originally posted August 7, 2015

A British psychologist is receiving sharp criticism from some professional peers for providing expert advice to help the U.K. surveillance agency GCHQ manipulate people online.

The debate brings into focus the question of how or whether psychologists should offer their expertise to spy agencies engaged in deception and propaganda.

Dr. Mandeep K. Dhami, in a 2011 paper, provided the controversial GCHQ spy unit JTRIG with advice, research pointers, training recommendations, and thoughts on psychological issues, with the goal of improving the unit’s performance and effectiveness. JTRIG’s operations have been referred to as “dirty tricks,” and Dhami’s paper notes that the unit’s own staff characterize their work using “terms such as ‘discredit,’ promote ‘distrust,’ ‘dissuade,’ ‘deceive,’ ‘disrupt,’ ‘delay,’ ‘deny,’ ‘denigrate/degrade,’ and ‘deter.’” The unit’s targets go beyond terrorists and foreign militaries and include groups considered “domestic extremist[s],” criminals, online “hacktivists,” and even “entire countries.”

The entire article is here.

Saturday, August 22, 2015

A Quantitative Analysis of Undisclosed Conflicts of Interest in Pharmacology Textbooks

Piper BJ, Telku HM, Lambert DA (2015) A Quantitative Analysis of Undisclosed Conflicts of Interest in Pharmacology Textbooks. PLoS ONE 10(7): e0133261. doi:10.1371/journal.pone.0133261



Disclosure of potential conflicts of interest (CoI) is a standard practice for many biomedical journals but not for educational materials. The goal of this investigation was to determine whether the authors of pharmacology textbooks have undisclosed financial CoIs and to identify author characteristics associated with CoIs.

Methods and Findings

The presence of potential CoIs was evaluated by submitting author names (N = 403; 36.3% female) to a patent database (Google Scholar) as well as a database that reports on the compensation ($USD) received from 15 pharmaceutical companies (ProPublica’s Dollars for Docs). All publications (N = 410) of the ten highest compensated authors from 2009 to 2013 and indexed in Pubmed were also examined for disclosure of additional companies that the authors received research support, consulted, or served on speaker’s bureaus. A total of 134 patents had been awarded (Maximum = 18/author) to textbook authors. Relative to DiPiro’s Pharmacotherapy: A Pathophysiologic Approach, contributors to Goodman and Gilman’s Pharmacological Basis of Therapeutics and Katzung’s Basic and Clinical Pharmacology were more frequently patent holders (OR = 6.45, P < .0005). Female authors were less likely than males to have > 1 patent (OR = 0.15, P < .0005). A total of $2,411,080 USD (28.3% for speaking, 27.0% for consulting, and 23.9% for research), was received by 53 authors (Range = $299 to $310,000/author). Highly compensated authors were from multiple fields including oncology, psychiatry, neurology, and urology. The maximum number of additional companies, not currently indexed in the Dollars for Docs database, for which an author had potential CoIs was 73.


Financial CoIs are common among the authors of pharmacology and pharmacotherapy textbooks. Full transparency of potential CoIs, particularly patents, should become standard procedure for future editions of educational materials in pharmacology.

The entire article is here.

Friday, August 21, 2015

How medical students learn ethics: an online log of their learning experiences

Carolyn Johnston & Jonathan Mok
J Med Ethics doi:10.1136/medethics-2015-102716


Medical students experience ethics learning in a wide variety of formats, delivered not just through the taught curriculum. An audit of ethics learning was carried out at a medical school through a secure website over one academic year to determine the quantity and range of medical ethics learning in the undergraduate curriculum and compare this with topics for teaching described by the Institute of Medical Ethics (IME) (2010) and the General Medical Council's (GMC) Tomorrow's Doctors (2009). The online audit captured the participants’ reflections on their learning experiences and the impact on their future practice. Results illustrate the opportunistic nature of ethics learning, especially in the clinical years, and highlight the reality of the hidden curriculum for medical students. Overall, the ethics learning was a helpful and positive experience for the participants and fulfils the GMC and IME curriculum requirements.

The entire article is here.

How do Medical Students Learn Ethics?

Guest Post by Carolyn Johnston
BMJ Blogs
Originally posted on August 3, 2015

How interested are medical students in learning ethics and law? I have met students who have a genuine interest in the issues, who are engaged in teaching sessions and may go on to intercalate in ethics and law. On the other hand some consider that ethics is “just common sense”. They want to know only the legal parameters within which they will go on to practice and do not want to be troubled with a discussion of ethical issues for which there may not be a “correct” answer.

Ethics and law is a core part of the undergraduate medical curriculum and so in order to engage students successfully I need to know whether my teaching materials are relevant, useful and interesting. In 2010 I ran a student selected component in which MBBS Year 2 students created materials for medical ethics and law topics for pre-clinical students which they considered were engaging and relevant, so that students might go further than learning merely to pass exams. One student, Marcus Sorensen, who had managed a design consultancy focusing on web design and development before starting his medical studies, came up with the idea of a website as a platform for ethics materials for King’s students and he created the website http://get-ethical.co.uk.

The entire article is here.

Thursday, August 20, 2015

Algorithms and Bias: Q. and A. With Cynthia Dwork

By Claire Cain Miller
The New York Times - The Upshot
Originally posted August 10, 2015

Here is an excerpt:

Q: Some people have argued that algorithms eliminate discrimination because they make decisions based on data, free of human bias. Others say algorithms reflect and perpetuate human biases. What do you think?

A: Algorithms do not automatically eliminate bias. Suppose a university, with admission and rejection records dating back for decades and faced with growing numbers of applicants, decides to use a machine learning algorithm that, using the historical records, identifies candidates who are more likely to be admitted. Historical biases in the training data will be learned by the algorithm, and past discrimination will lead to future discrimination.

The entire article is here.

Life After Faith

Richard Marshall interviews Philip Kitcher
3:AM Magazine
Originally published on August 2, 2015

Here is an excerpt:

Thought experiments work when, and only when, they call into action cognitive capacities that might reliably deliver the conclusions drawn. When the question posed is imprecise, your thought experiment is typically useless. But even more crucial is the fact that the stripped-down scenarios many philosophers love simply don’t mesh with our intellectual skills. The story rules out by fiat the kinds of reactions we naturally have in the situation described. Think of the trolley problem in which you are asked to decide whether to push the fat man off the bridge. If you imagine yourself – seriously imagine yourself – in the situation, you’d look around for alternatives, you’d consider talking to the fat man, volunteering to jump with him, etc. etc. None of that is allowed. So you’re offered a forced choice about which most people I know are profoundly uneasy. The “data” delivered are just the poor quality evidence any reputable investigator would worry about using. (I like Joshua Greene’s fundamental idea of investigating people’s reactions; but I do wish he’d present them with better questions.)

Philosophers love to appeal to their “intuitions” about these puzzle cases. They seem to think they have access to little nuggets of wisdom. We’d all be much better off if the phrase “My intuition is …” were replaced by “Given my evolved psychological adaptations and my distinctive enculturation, when faced by this perplexing scenario, I find myself, more or less tentatively, inclined to say …” Maybe there are occasions in which the cases bring out some previously unnoticed facet of the meaning of a word. But, for a pragmatist like me, the important issues concern the words we might deploy to achieve our purposes, rather than the language we actually use.

If the intuition-mongering were abandoned, would that be the end of philosophy? It would be the end of a certain style of philosophy – a style that has cut philosophy off, not only from the humanities but from every other branch of inquiry and culture. (In my view, most of current Anglophone philosophy is quite reasonably seen as an ingrown conversation pursued by very intelligent people with very strange interests.) But it would hardly stop the kinds of investigation that the giants of the past engaged in. In my view, we ought to replace the notion of analytic philosophy by that of synthetic philosophy. Philosophers ought to aspire to know lots of different things and to forge useful synthetic perspectives.

The entire interview is here.

Wednesday, August 19, 2015

Fetal Tissue Fallout

R. Alta Charo
The New England Journal of Medicine
August 12, 2015
DOI: 10.1056/NEJMp1510279

We have a duty to use fetal tissue for research and therapy.

This statement might seem extreme in light of recent events that have reopened a seemingly long-settled debate over whether such research ought even be permitted, let alone funded by the government. Morality and conscience have been cited to justify defunding, and even criminalizing, the research, just as morality and conscience have been cited to justify not only health care professionals' refusal to provide certain legal medical services to their patients but even their obstruction of others' fulfillment of that duty.

But this duty of care should, I believe, be at the heart of the current storm of debate surrounding fetal tissue research, an outgrowth of the ongoing effort to defund Planned Parenthood. And that duty includes taking advantage of avenues of hope for current and future patients, particularly if those avenues are being threatened by a purely political fight — one that, in this case, will in no way actually affect the number of fetuses that are aborted or brought to term, the alleged goal of the activists involved.

The entire article is here.

Empathy, Justice, and Moral Behavior

By Jean Decety and Jason M. Cowell
AJOB Neuroscience, 6(3): 3–14, 2015

Empathy shapes the landscape of our social lives. It motivates prosocial and caregiving behaviors, plays a role in inhibiting aggression, and facilitates cooperation between members of a similar social group. Thus, empathy is often conceived as a driving motivation of moral behavior and justice, and, as such, everyone would think that it should be cultivated. However, the relationships between empathy, morality, and justice are complex. We begin by explaining what the notion of empathy encompasses and then argue how sensitivity to others’ needs has evolved in the context of parental care and group living. Next, we examine the multiple physiological, hormonal, and neural systems supporting empathy and its functions. One troubling but important corollary of this neuro-evolutionary model is that empathy produces social preferences that can conflict with fairness and justice. An understanding of the factors that mold our emotional response and caring motivation for others helps provide organizational principles and ultimately guides decision making in medical ethics.

The entire article is here.

Tuesday, August 18, 2015

What Emotions Are (and Aren’t)

By Lisa Feldman Barrett
The New York Times
Originally published July 31, 2015

Here is an excerpt:

Brain regions like the amygdala are certainly important to emotion, but they are neither necessary nor sufficient for it. In general, the workings of the brain are not one-to-one, whereby a given region has a distinct psychological purpose. Instead, a single brain area like the amygdala participates in many different mental events, and many different brain areas are capable of producing the same outcome. Emotions like fear and anger, my lab has found, are constructed by multipurpose brain networks that work together.

If emotions are not distinct neural entities, perhaps they have a distinct bodily pattern — heart rate, respiration, perspiration, temperature and so on?

Again, the answer is no.

The entire article is here.

Not Just Empathy: Meaning of Life TV

Robert Wright and Paul Bloom
Meaning of Life.tv

Bob and Paul discuss empathy, compassion, values, moral development, beliefs, in-group/out-group biases, and evolutionary psychology.  There are some pithy remarks and humorous lines, but also a great deal of research and wisdom in this 42-minute video.  It is truly a video worth watching.

Monday, August 17, 2015

Hormones and Ethics: Understanding the Biological Basis of Unethical Conduct.

Lee, Jooa Julie, Francesca Gino, Ellie Shuo Jin, Leslie K. Rice, and Robert A. Josephs.
Journal of Experimental Psychology: General (in press).


Globally, fraud has been rising sharply over the last decade, with current estimates placing financial losses at greater than $3.7 trillion annually. Unfortunately, fraud prevention has been stymied by lack of a clear and comprehensive understanding of its underlying causes and mechanisms. In this paper, we focus on an important but neglected topic—the biological antecedents and consequences of unethical conduct—using salivary collection of hormones (testosterone and cortisol). We hypothesized that pre-performance cortisol would interact with pre-performance levels of testosterone to regulate cheating behavior in two studies. Further, based on the previously untested cheating-as-stress-reduction hypothesis, we predicted a dose-response relationship between cheating and reductions in cortisol and negative affect. Taken together, this research marks the first foray into the possibility that endocrine system activity plays an important role in the regulation of unethical behavior.

The entire article is here.

Doctors got $84M from drug companies

By Lauryn Schroeder
The San Diego Union Tribune
Originally published July 29, 2015

Doctors in San Diego County received $84 million in payouts from drug and medical device companies last year, according to federal data.

Health professionals received payments for services such as consulting, promotional speaking and research, as well as gifts in the form of meals and entertainment, according to a review of federal data by The San Diego Union-Tribune. More than 107,000 transactions were documented.

The information is being gathered and disclosed as part of a federal effort to bring more transparency to relationships that could lead to conflicts of interest, if doctors take money or gifts and then prescribe certain drugs.

The San Diego data was dominated by some larger transactions, such as a doctor collecting royalties on an invention, or a La Jolla couple who recently sold their medical device maker to one of the large drug companies.

The entire article is here.

Sunday, August 16, 2015

Ethical Practice in Telepsychology

By Nicholas Gamble, Christopher Boyle and Zoe A Morris
Special Issue: Telepsychology: Research and Practice
Volume 50, Issue 4, pages 292–298, August 2015


Telepsychology has the potential to revolutionise the provision of psychological service not only to those in remote locations, or with mobility issues, but also for those who prefer flexible access to services. Rapid developments in internet communications technology have yielded new and diverse methods of telepsychology. As a result, ethical regulatory and advisory guidelines for practice have often been developed and disseminated reactively. This article investigates how the core ethical principles of confidentiality, consent and competence are challenged in telepsychological practice.


Through the application of existing ethical standards, advances in communications technology are considered and their ethical use in psychological contexts explored.


It is expected that psychologists will have basic competencies for the use of everyday technology in their practice. However, the use of internet communications technology for telepsychology has created new opportunities and challenges for ethical practice. For example, telepsychology is geographically flexible, but there can be privacy concerns in cross-border information flow. Psychologists who engage in telepsychology require a particularly thorough understanding of concepts such as data mining, electronic storage, and internet infrastructure. This article highlights how existing technology and communication tools both challenge and support ethical practice in telepsychology in an Australian regulatory context.

The entire article is here.

Saturday, August 15, 2015

Understanding Libertarian Morality: The Psychological Dispositions of Self-Identified Libertarians

Ravi Iyer, Spassena Koleva, Jesse Graham, Peter Ditto, Jonathan Haidt
PLOS | One
Published: August 21, 2012
DOI: 10.1371/journal.pone.0042366


Libertarians are an increasingly prominent ideological group in U.S. politics, yet they have been largely unstudied. Across 16 measures in a large web-based sample that included 11,994 self-identified libertarians, we sought to understand the moral and psychological characteristics of self-described libertarians. Based on an intuitionist view of moral judgment, we focused on the underlying affective and cognitive dispositions that accompany this unique worldview. Compared to self-identified liberals and conservatives, libertarians showed 1) stronger endorsement of individual liberty as their foremost guiding principle, and weaker endorsement of all other moral principles; 2) a relatively cerebral as opposed to emotional cognitive style; and 3) lower interdependence and social relatedness. As predicted by intuitionist theories concerning the origins of moral reasoning, libertarian values showed convergent relationships with libertarian emotional dispositions and social preferences. Our findings add to a growing recognition of the role of personality differences in the organization of political attitudes.

The entire article is here.

Friday, August 14, 2015

Distributed Morality in an Information Society

Luciano Floridi
Sci Eng Ethics (2013) 19:727–743
DOI 10.1007/s11948-012-9413-4


The phenomenon of distributed knowledge is well-known in epistemic logic. In this paper, a similar phenomenon in ethics, somewhat neglected so far, is investigated, namely distributed morality. The article explains the nature of distributed morality, as a feature of moral agency, and explores the implications of its occurrence in advanced information societies. In the course of the analysis, the concept of infraethics is introduced, in order to refer to the ensemble of moral enablers, which, although morally neutral per se, can significantly facilitate or hinder both positive and negative moral behaviours.

Here is an excerpt from the conclusion:

The conclusion is that an information society is a better society if it can implement an array of moral enablers, an infraethics that is, that can support and facilitate the right sort of DM, while preventing the occurrence and strengthening of moral hinderers. Agents (including, most importantly, the State) are better agents insofar as they not only take advantage of, but also foster the right kind of moral facilitation properly geared to the right kind of distributed morality. It is a complicated scenario, but refusing to acknowledge it will not make it go away.

The entire paper is here.

What explains the rise of humans?

Yuval Noah Harari
TED Talk
Posted on June 20, 2015

Seventy thousand years ago, our human ancestors were insignificant animals, just minding their own business in a corner of Africa with all the other animals. But now, few would disagree that humans dominate planet Earth; we've spread to every continent, and our actions determine the fate of other animals (and possibly Earth itself). How did we get from there to here? Historian Yuval Noah Harari suggests a surprising reason for the rise of humanity.

Thursday, August 13, 2015

Meeting the Challenge of Change

By Ken Pope
Excerpted from Ethics in Psychotherapy and Counseling: A Practical Guide, 5th Ed. 
Forthcoming January 2016.

Here is an excerpt:

When complicity with torture, violations of human rights, misleading the public, and other vital matters are at stake, organizations must address not only personnel, policies, and procedures but also the powerful incentives from inside and outside the organization, sources of institutional resistance to change, conflicting ethical and political values within the organization, and issues of institutional character and culture that allowed the problems to flourish for years, protected by APA's denials.

Organizations facing ethical scandals often publicly commit to admirable values such as accountability, transparency, openness to criticism, strict enforcement of ethical standards, and so on. These institutional commitments so often meet the same fate as our own individual promises to a program of personal change. We make a firm New Year's resolution to lead a healthier life. We pour time, energy, and sometimes money into making sure the change happens. We buy jogging shoes and a cookbook of healthy meals. We take out a gym membership. We discuss endlessly what approaches yield the best results. We commit to eating only healthy foods and to getting up five days a week at 5 a.m. for an hour of stretching, aerobics, and resistance exercises. But one, two, and three months later, the commitment to change that had taken such fierce hold of us and promised such wanted, needed, and carefully planned improvement has loosened or lost its grip.

The entire article is here.

Neuroscience of free will: Does reaching for beer with robotic arm mean free will doesn’t exist?

By Andrew Porterfield
Genetic Literacy Project
Originally published on July 21, 2015

Here is an excerpt:

But researchers at CalTech tried something different with him. Instead of hooking up electrodes to the motor cortex, they chose another site in the brain: The posterior parietal cortex is another part of the brain that controls actions like limb movements, but it does so on a far more sophisticated level than the motor cortex. The posterior parietal is involved in the planning of movements, and much of this planning is unconscious. So, when the CalTech team implanted electrodes in Sorto’s posterior parietal, they found that they could predict movements before he actually made them. And once the brain signals doing the predicting were known, they could be used to smoothly move his limbs. Essentially, the electrode was helping him unconsciously decide to move his arms, hands and fingers. Which made beer drinking all the easier.

The fact that scientists can chart the brain’s behavior has led many to revisit an old argument over the existence of free will. If we can predict a person’s intentions just by picking up brain signals (and it took a computer two years to predict Sorto’s), then how free are our minds? How many decisions that we make every day are truly under our conscious control? Is there really free will?

The entire article is here.

Wednesday, August 12, 2015

Conflicts of Interest on Institutional Review Boards Remain Problematic

By Ed Silverman
The Wall Street Journal
Originally posted July 14, 2015

Here is an excerpt:

Well, a new study in JAMA Internal Medicine finds there is “significant progress” among IRB members in reporting and managing conflicts of interest when compared with the results of a similar study conducted in 2005. Still, the study authors, who queried 493 IRB members at 100 medical schools and 15 hospitals that received the most funding from NIH in 2012, say that problems remain.

First, though, here is the good news: There was a drop in the percentage of IRB members with conflicts – 30.4% last year compared with 39% in 2005, although this was not deemed to be a significant change. And those who were willing to report a conflict jumped to 80% from 55%. And 68% of IRB members with a conflict said they would leave the room when a protocol was discussed, compared with 38% in 2005.

The entire story is here.

Thoughts on Psychologists, Ethics, and the Use of Torture in Interrogations

Zimbardo, P.G. (2007). Thoughts on Psychologists, Ethics, and the Use of Torture in Interrogations: Don’t Ignore Varying Roles and Complexities.
Analyses of Social Issues and Public Policy (ASAP) Online SSPSI Journal. Vol. 7, pp. 65-73.

Here is an excerpt:

Such considerations lead me to conclude that PENS has utilized the wrong model for its ethical deliberations about psychologists as consultants to military interrogations. The model featured in this task force report is that of a psychologist working for the military as an independent contractor, making rational moral decisions within a transparent setting, with full power to confront, challenge and expose unethical practices. It is left up to that individual to be alert, informed, perceptive, wise, and ready to act on principle when ethical dilemmas arise.

Instead, I will argue that those psychologists are "hired hands" working at the discretion of their military or government agency clients for as long as they provide valued service, which in the current war on terrorism is to assist by providing whatever information and advice is requested to gain "actionable intelligence" from those interrogated. PENS notes that psychologists often are part of a group of professionals, rarely acting alone. They can become part of an operational team, experiencing normative pressures to conform to the emerging standards of that group. They cannot make readily informed ethical decisions because they do not have full knowledge of how their personal contributions are being used in secret or classified missions. Their judgments and decisions may be made under conditions of uncertainty, and may include high stress. Moreover, definitions of basic terms are not constant, but shifting, so it becomes difficult or impossible to make a fully informed ethical judgment about any specific aspect of one's functions.

In addition, PENS does not recognize the reality that in field settings, the work of Ph.D./Psy.D. psychologists is often substituted by, or made operational by, numerous paraprofessionals, such as mental health counselors, personnel officers, psychological assistants and interns, and others trained in psychology. If they do not belong to professional associations, such as APA, they are relieved of the professional consequences of engaging in unethical actions. Thus, our concerns must extend to these psychologist paraprofessionals as well as those professionals within APA.

The entire article is here.

Tuesday, August 11, 2015

Do we still need to study the death penalty?

By Ryan J. Winter
The Monitor on Psychology
2015, Vol 46, No. 7
Print version: page 32

Recent Gallup polling shows support for the death penalty in the United States is at a 40-year low, with the 63 percent favorability rating a stark contrast to the 80 percent who supported it in the 1990s. When comparing death to life in prison, death favorability drops to 42 percent. Meanwhile, the number of death verdicts has also dropped, with only 73 defendants sentenced to death and 35 executed in 2014. Contrast this with 279 sentences and 98 executions in 1999. Of 32 death penalty states, only seven carried out executions in 2014, the fewest in 25 years. Further, eight states have abolished the death penalty since 2007, and no states have added the penalty.

As its reign appears to be over, there's no need to continue studying the death penalty, right?

Not so fast. Focusing on the malicious actions of Boston Marathon bomber Dzhokhar Tsarnaev, prosecutors and defense attorneys seemingly set aside all pretenses about his guilt to focus on the only trial phase worth attention: whether he deserved death. Half a country away, James Holmes — the Aurora, Colorado, movie theater shooter — began his trial with a prosecution equally zealous in pursuing death.

The entire article is here.

Free Will Skepticism and Criminal Behavior: A Public Health-Quarantine Model

Gregg D. Caruso
[Draft 6/11/2015]
[Cite final version: Southwest Philosophy Review 2016, 32 (1)]

One of the most frequently voiced criticisms of free will skepticism is that it is unable to adequately deal with criminal behavior and that the responses it would permit as justified are insufficient for acceptable social policy. This concern is fueled by two factors. The first is that one of the most prominent justifications for punishing criminals, retributivism, is incompatible with free will skepticism. The second concern is that alternative justifications that are not ruled out by the skeptical view per se face significant independent moral objections (Pereboom 2014, 153). Yet despite these concerns, I maintain that free will skepticism leaves intact other ways to respond to criminal behavior—in particular preventive detention, rehabilitation, and alteration of relevant social conditions—and that these methods are both morally justifiable and sufficient for good social policy. The position I defend is similar to Derk Pereboom’s (2001, 2013, 2014), taking as its starting point his quarantine analogy, but it sets out to develop the quarantine model within a broader justificatory framework drawn from public health ethics. The resulting model—which I call the public health-quarantine model—provides a framework for justifying quarantine and criminal sanctions that is more humane than retributivism and preferable to other non-retributive alternatives. It also provides a broader approach to criminal behavior than Pereboom’s quarantine analogy does on its own.

The entire paper is here.

Monday, August 10, 2015

The dawn of artificial intelligence

Powerful computers will reshape humanity’s future. How to ensure the promise outweighs the perils

The Economist
Originally published May 9, 2015

“The development of full artificial intelligence could spell the end of the human race,” Stephen Hawking warns. Elon Musk fears that the development of artificial intelligence, or AI, may be the biggest existential threat humanity faces. Bill Gates urges people to beware of it.

Dread that the abominations people create will become their masters, or their executioners, is hardly new. But voiced by a renowned cosmologist, a Silicon Valley entrepreneur and the founder of Microsoft—hardly Luddites—and set against the vast investment in AI by big firms like Google and Microsoft, such fears have taken on new weight. With supercomputers in every pocket and robots looking down on every battlefield, just dismissing them as science fiction seems like self-deception. The question is how to worry wisely.

The entire article is here.

The Kindness Cure

By David Desteno
The Atlantic
Originally published July 21, 2015

Here is an excerpt:

Since acting compassionately usually means putting others’ needs ahead of your own, prompting yourself to act with kindness often requires not only vigilance but a bit of willpower. That’s not to say that relying on religious or philosophical guidance to prompt kindness won’t work at times. It will. But any method that depends on constant redirection of selfish urges and top-down monitoring of one’s moral code is apt to fail. Perhaps cultivating compassion situationally—so that it automatically emerges at the sight of others in need—would be more foolproof. As a psychologist interested in moral behavior, I have long wondered if there might be a way to develop precisely this sort of reflexive compassion.

As it turns out, I didn’t have to look too far; a means was hiding in plain sight. Mindfulness meditation involves guided contemplation as a way to focus the mind. It commonly entails sitting in a quiet space for periods ranging from 20 minutes to an hour (depending on your level of advancement) and learning to guide awareness to the current moment rather than dwell upon what has been or is yet to come.

The entire article is here.

Sunday, August 9, 2015

What, exactly, does yesterday’s APA resolution prohibit?

By Marty Lederman
Just Security
Originally posted August 8, 2015

By an overwhelming vote of 156-1 (with seven abstentions and one recusal)–so lopsided that it stunned even its proponents–the American Psychological Association’s Council of Representatives yesterday approved a resolution that the APA describes as “prohibit[ing] psychologists from participating in national security interrogations.”

What does Approved Resolution No. 23B do, exactly?  As I read it, it does three principal things, in ascending order of importance:

1.  It reaffirms an existing APA ethical prohibition that psychologists “may not engage directly or indirectly in any act of torture or cruel, inhuman, or degrading treatment or punishment,” a prohibition that “applies to all persons (including foreign detainees) wherever they may be held”; and it “clarifies” that “cruel, inhuman, or degrading treatment or punishment” (CIDTP) should be understood not (or not only) as that term is defined in the U.S. Senate’s understandings of, and reservations to, the Convention Against Torture, but instead in accord with the broadest understanding of CIDTP adopted by any international legal body at the relevant time:  the definition “continues to evolve with international legal understandings of this term.”


3.  Finally, and most significantly, the Resolution establishes a new prohibition that “psychologists shall not conduct, supervise, be in the presence of, or otherwise assist any national security interrogations for any military or intelligence entities, including private contractors working on their behalf, nor advise on conditions of confinement insofar as these might facilitate such an interrogation.”

The entire article is here.

Fifty Shades of Manipulation

Cass R. Sunstein
Journal of Behavioral Marketing, Forthcoming
February 18, 2015


A statement or action can be said to be manipulative if it does not sufficiently engage or appeal to people’s capacity for reflective and deliberative choice. One problem with manipulation, thus understood, is that it fails to respect people’s autonomy and is an affront to their dignity. Another problem is that if they are products of manipulation, people’s choices might fail to promote their own welfare, and might instead promote the welfare of the manipulator. To that extent, the central objection to manipulation is rooted in a version of Mill’s Harm Principle: People know what is in their best interests and should have a (manipulation-free) opportunity to make that decision. On welfarist grounds, the norm against manipulation can be seen as a kind of heuristic, one that generally works well, but that can also lead to serious errors, at least when the manipulator is both informed and genuinely interested in the welfare of the chooser.

For the legal system, a pervasive puzzle is why manipulation is rarely policed. The simplest answer is that manipulation has so many shades, and in a social order that values free markets and is committed to freedom of expression, it is exceptionally difficult to regulate manipulation as such. But as the manipulator’s motives become more self-interested or venal, and as efforts to bypass people’s deliberative capacities becomes more successful, the ethical objections to manipulation become very forceful, and the argument for a legal response is fortified. The analysis of manipulation bears on emerging first amendment issues raised by compelled speech, especially in the context of graphic health warnings. Importantly, it can also help orient the regulation of financial products, where manipulation of consumer choices is an evident but rarely explicit concern.

The entire article is here.

Saturday, August 8, 2015

Why Ethics Codes Fail

By Laura Stark
Inside Higher Ed
Originally published July 21, 2015

Last week, an independent investigation of the American Psychological Association found that several of its leaders aided the U.S. Department of Defense’s controversial enhanced interrogation program by loosening constraints on military psychologists. It was another bombshell in the ongoing saga of the U.S. war on terror in which psychologists have long served as foot soldiers. Now, it appears, psychologists were among its instigators, too.

Leaders of the APA used the profession’s ethics policy to promote unethical activity, rather than to curb it. How? Between 2000 and 2008, APA leaders changed their ethics policy to match the unethical activities that some psychologists wanted to carry out -- and thus make potential torture appear ethical. “The evidence supports the conclusion that APA officials colluded with DoD officials to, at the least, adopt and maintain APA ethics policies that were not more restrictive than the guidelines that key DoD officials wanted,” the investigation found, “and that were as closely aligned as possible with DoD policies, guidelines, practices or preferences, as articulated to APA by these DoD officials.” Among the main culprits was the APA’s own ethics director.

The entire article is here.

Friday, August 7, 2015

Psychologists Approve Ban on Role in National Security Interrogations

By James Risen
The New York Times
Originally posted August 7, 2015

The American Psychological Association on Friday overwhelmingly approved a new ban on any involvement by psychologists in national security interrogations conducted by the United States government, even noncoercive interrogations now conducted by the Obama administration.

The council of representatives of the organization, the nation’s largest professional association of psychologists, voted to impose the ban at its annual meeting here.

The vote followed an emotional debate in which several members said the ban was needed to restore the organization’s reputation in the wake of a scathing independent investigation ordered by the A.P.A.’s board.

The entire article is here.

The science and morality of climate change

By Amanda D. Rodewald
The Hill
Originally published July 21, 2015

Here is an excerpt:

Recently, however, there has been a shift in the conversation from largely scientific and technical grounds to morality and ethics. Last month, Pope Francis released an encyclical — a formal statement of the Vatican's views on an issue — that highlights the impacts that climate change will have on humanity, especially poor and vulnerable populations. In his statement, Francis warned that human activities are changing the climate, chastised "obstructionists" for blocking action, and called for global leaders — and each one of us — to meet our "moral obligation" to fight it.

The entire article is here.

How Evolution Illuminates the Human Condition

The Wright Show - Meaning TV
Robert Wright and David Sloan Wilson
Originally posted July 19, 2015

Robert Wright and David Sloan Wilson discuss evolution, biology, psychology, religion, culture, science, values, beliefs, meaning, altruism, motivation, groupishness, and group strength.

Thursday, August 6, 2015

When Knowledge Knows No Bounds

By Stav Atir, Emily Rosenzweig, and David Dunning
Psychological Science, first published on July 14, 2015


People overestimate their knowledge, at times claiming knowledge of concepts, events, and people that do not exist and cannot be known, a phenomenon called overclaiming. What underlies assertions of such impossible knowledge? We found that people overclaim to the extent that they perceive their personal expertise favorably. Studies 1a and 1b showed that self-perceived financial knowledge positively predicts claiming knowledge of nonexistent financial concepts, independent of actual knowledge. Study 2 demonstrated that self-perceived knowledge within specific domains (e.g., biology) is associated specifically with overclaiming within those domains. In Study 3, warning participants that some of the concepts they saw were fictitious did not reduce the relationship between self-perceived knowledge and overclaiming, which suggests that this relationship is not driven by impression management. In Study 4, boosting self-perceived expertise in geography prompted assertions of familiarity with nonexistent places, which supports a causal role for self-perceived expertise in claiming impossible knowledge.

The entire article is here.

The causal cognition of wrong doing: incest, intentionality, and morality

Rita Astuti and Maurice Bloch
Front. Psychol., 18 February 2015


The paper concerns the role of intentionality in reasoning about wrong doing. Anthropologists have claimed that, in certain non-Western societies, people ignore whether an act of wrong doing is committed intentionally or accidentally. To examine this proposition, we look at the case of Madagascar. We start by analyzing how Malagasy people respond to incest, and we find that in this case they do not seem to take intentionality into account: catastrophic consequences follow even if those who commit incest are not aware that they are related as kin; punishment befalls innocent people; and the whole community is responsible for repairing the damage. However, by looking at how people reason about other types of wrong doing, we show that the role of intentionality is well understood, and that in fact this is so even in the case of incest. We therefore argue that, when people contemplate incest and its consequences, they simultaneously consider two quite different issues: the issue of intentionality and blame, and the much more troubling and dumbfounding issue of what society would be like if incest were to be permitted. This entails such a fundamental attack on kinship and on the very basis of society that issues of intentionality and blame become irrelevant. Using the insights we derive from this Malagasy case study, we re-examine the results of Haidt’s psychological experiment on moral dumbfoundedness, which uses a story about incest between siblings as one of its test scenarios. We suggest that the dumbfoundedness that was documented among North American students may be explained by the same kind of complexity that we found in Madagascar. In light of this, we discuss the methodological limitations of experimental protocols, which are unable to grasp multiple levels of response. We also note the limitations of anthropological methods and the benefits of closer cross-disciplinary collaboration.

The entire article is here.

Wednesday, August 5, 2015

‘What would I eliminate if I had a magic wand? Overconfidence’

The psychologist and bestselling author of Thinking, Fast and Slow reveals his new research and talks about prejudice, fleeing the Nazis, and how to hold an effective meeting

By David Shariatmadari
The Guardian
Originally posted on July 18, 2015

Here is an excerpt:

What’s fascinating is that Kahneman’s work explicitly swims against the current of human thought. Not even he believes that the various flaws that bedevil decision-making can be successfully corrected. The most damaging of these is overconfidence: the kind of optimism that leads governments to believe that wars are quickly winnable and capital projects will come in on budget despite statistics predicting exactly the opposite. It is the bias he says he would most like to eliminate if he had a magic wand. But it “is built so deeply into the structure of the mind that you couldn’t change it without changing many other things”.

The entire article is here.

Despite allegations, suspended priest thrives as family therapist

Caitlin McCabe
Philadelphia Inquirer
Originally posted July 16, 2015

After the Roman Catholic Diocese of Camden removed Edward Igle from active ministry in 2000 over an allegation of sex abuse, he turned to his second career: family counseling.

Licensed as a therapist since the 1980s, the suspended priest runs a South Jersey practice, counseling families and children, and teaches related classes through a Philadelphia-based center, including on how to identify and clinically treat victims of sex abuse.

In 2011, church officials told New Jersey regulators about two men who claimed that Igle abused them in the 1970s. The diocese deemed both claims credible, a spokesman said, but too late under the statute of limitations to lead to prosecution.

The state has repeatedly renewed Igle's licenses.

In interviews this month, Igle, 68, denied any misconduct. He called "inaccurate" any suggestion that the first abuse allegation forced him from ministry.

"I have never sexually abused anyone in my life," he said last week at his Vineland family and marriage counseling practice, the Center for Relational Counseling.

He said that although he counsels children, he never meets alone with them. And when he teaches professionals about sex abuse, among other topics, he said he sometimes mentions that he was once accused of abuse.

The entire article is here.

Tuesday, August 4, 2015

Killer Robots: The Soldiers that Never Sleep

By Simon Parker
Originally published July 16, 2015

Here is an excerpt:

Likewise, a fully autonomous version of the Predator drone may have to decide whether or not to fire on a house whose occupants include both enemy soldiers and civilians. How do you, as a software engineer, construct a set of rules for such a device to follow in these scenarios? Is it possible to program a device to think for itself? For many, the simplest solution is to sidestep these questions by requiring any automated machine that puts human life in danger to allow a human override. This is the reason that landmines were banned by the Ottawa treaty in 1997. They were, in the most basic way imaginable, autonomous weapons that would detonate beneath whoever stepped on them.

In this context the provision of human overrides makes sense. It seems obvious, for example, that pilots should have full control over a plane's autopilot system. But the 2015 Germanwings disaster, when co-pilot Andreas Lubitz deliberately crashed the plane into the French Alps, killing all 150 passengers, complicates the matter. Perhaps, in fact, no pilot should be allowed to override a computer – at least, not if overriding means being able to fly a plane into a mountainside?

“There are multiple approaches to trying to develop ethical machines, and many challenges,” explains Gary Marcus, cognitive scientist at NYU and CEO and Founder of Geometric Intelligence. “We could try to pre-program everything in advance, but that’s not trivial – how for example do you program in a notion like ‘fairness’ or ‘harm’?” There is another dimension to the problem aside from ambiguous definitions. For example, any set of rules issued to an automated soldier will surely be either too abstract to be properly computable, or too specific to cover all situations.

The entire article is here.

Psychologists are known for being liberal, but why?

By Elliot Berkman
The Conversation
Originally published July 14, 2015

Is the field of social psychology biased against political conservatives? There has been intense debate about this question since an informal poll of over 1,000 attendees at a social psychology meeting in 2011 revealed the group to be overwhelmingly liberal.

Formal surveys have produced similar results, showing the ratio of liberals to conservatives in the broader field of psychology is 14-to-1.

Since then, social psychologists have tried to figure out why this imbalance exists.

The primary explanation offered is that the field has an anticonservative bias. I have no doubt that this bias exists, but it’s not strong enough to push people who lean conservative out of the field at the rate they appear to be leaving.

I believe that a less prominent explanation is more compelling: learning about social psychology can make you more liberal. I know about this possibility because it is exactly what happened to me.

The entire article is here.

Monday, August 3, 2015

Empathy Is Actually a Choice

By Daryl Cameron, Michael Inzlicht and William A. Cunningham
The New York Times - Gray Matter
Originally published July 10, 2015

ONE death is a tragedy. One million is a statistic.

You’ve probably heard this saying before. It is thought to capture an unfortunate truth about empathy: While a single crying child or injured puppy tugs at our heartstrings, large numbers of suffering people, as in epidemics, earthquakes and genocides, do not inspire a comparable reaction.

Studies have repeatedly confirmed this. It’s a troubling finding because, as recent research has demonstrated, many of us believe that if more lives are at stake, we will — and should — feel more empathy (i.e., vicariously share others’ experiences) and do more to help.

Not only does empathy seem to fail when it is needed most, but it also appears to play favorites. Recent studies have shown that our empathy is dampened or constrained when it comes to people of different races, nationalities or creeds. These results suggest that empathy is a limited resource, like a fossil fuel, which we cannot extend indefinitely or to everyone.

The entire article is here.

Cheeseburger ethics

By Eric Schwitzgebel
Aeon Magazine
Originally published July 15, 2015

Here are two excerpts:

Ethicists do not appear to behave better. Never once have we found ethicists as a whole behaving better than our comparison groups of other professors, by any of our main planned measures. But neither, overall, do they seem to behave worse. (There are some mixed results for secondary measures.) For the most part, ethicists behave no differently from professors of any other sort – logicians, chemists, historians, foreign-language instructors.


‘Furthermore,’ she continues, ‘if we demand that ethicists live according to the norms they espouse, that will put major distortive pressures on the field. An ethicist who feels obligated to live as she teaches will be motivated to avoid highly self-sacrificial conclusions, such as that the wealthy should give most of their money to charity or that we should eat only a restricted subset of foods. Disconnecting professional ethicists’ academic enquiries from their personal choices allows them to consider the arguments in a more even-handed way. If no one expects us to act in accord with our scholarly opinions, we are more likely to arrive at the moral truth.’

The entire article is here.

Sunday, August 2, 2015

Is Consciousness an Engineering Problem?

We could build an artificial brain that believes itself to be conscious. Does that mean we have solved the hard problem?

By Michael Graziano
Aeon Magazine
Originally published July 10, 2015

Here is an excerpt:

As long as scholars think of consciousness as a magic essence floating inside the brain, it won’t be very interesting to engineers. But if it’s a crucial set of information, a kind of map that allows the brain to function correctly, then engineers may want to know about it. And that brings us back to artificial intelligence. Gone are the days of waiting for computers to get so complicated that they spontaneously become conscious. And gone are the days of dismissing consciousness as an airy-fairy essence that would bring no obvious practical benefit to a computer anyway. Suddenly it becomes an incredibly useful tool for the machine.

The entire article is here.

Saturday, August 1, 2015

Dilemma 33: Breaking Bad (or Good)

Dr. Jesse Pinkman has been working for about a year with Ms. Skyler White, a 26-year-old professional. They have been working on managing her symptoms of depression and anxiety.  The patient smokes marijuana regularly, which has been a concern for Dr. Pinkman.

Skyler arrives late to her appointment, looking frazzled.  She explains that her friend overdosed on heroin the prior evening and that she spent the past 12 hours with her in the ER.  Her friend will likely survive, but she may have residual cognitive problems.

Skyler reports feeling horribly guilty because she introduced her friend to her next-door neighbor, who is a drug dealer.  Her friend always stops by to see Skyler first, before purchasing drugs. Skyler purchases her marijuana from the same dealer.

After processing the events of the previous evening, Skyler states that she will move away from the drug dealer.  She no longer wants to live this close to him or to indirectly cause harm to someone else.  The police are actively investigating, but Skyler does not want to divulge any information.  She does not want to get involved.  Skyler makes an appointment for next week, and then leaves feeling somewhat better.

Dr. Pinkman becomes preoccupied with what Skyler reported.  Dr. Pinkman knows the dealer’s name from previous sessions and can figure out the dealer’s address based on his patient’s address.

Dr. Pinkman is contemplating calling in an anonymous tip to the police.  Dr. Pinkman is aware of the increase in heroin use in his community.  He also recognizes his own moral outrage and sense of injustice in this situation.  Struggling over whether to make an anonymous report, Dr. Pinkman calls you for a consultation.

What are the competing ethical principles in this situation?

How would you feel if you were Dr. Pinkman?

What are some of the positive and negative consequences of Dr. Pinkman making the anonymous report?

How do your own professional values and personal morals influence how you would respond to Dr. Pinkman?

How would you respond to Dr. Pinkman’s moral outrage?

Would your answers differ if the friend died?

Would your answers differ if the patient were of low socio-economic status?

Would your answers differ if Skyler were a teenager?