By Rebecca Johnson, Govind Persad, and Dominic Sisti
J Am Acad Psychiatry Law 42:4:469-477 (December 2014)
Abstract
Recent events have revived questions about the circumstances that ought to trigger therapists' duty to warn or protect. There is extensive interstate variation in duty to warn or protect statutes enacted and rulings made in the wake of the California Tarasoff ruling. These duties may be codified in legislative statutes, established in common law through court rulings, or remain unspecified. Furthermore, the duty to warn or protect is not only variable between states but also has been dynamic across time. In this article, we review the implications of this variability and dynamism, focusing on three sets of questions: first, what legal and ethics-related challenges do therapists in each of the three broad categories of states (states that mandate therapists to warn or protect, states that permit therapists to breach confidentiality for warnings but have no mandate, and states that give no guidance) face in handling threats of violence? Second, what training do therapists and other professionals involved in handling violent threats receive, and is this training adequate for the task that these professionals are charged with? Third, how have recent court cases changed the scope of the duty? We conclude by pointing to gaps in the empirical and conceptual scholarship surrounding the duty to warn or protect.
The entire article can be found here.
Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Wednesday, December 31, 2014
Making sense of a court's two cents
By David DeMatteo, JD, PhD, Jaymes Fairfax-Columbo, BA, and Daniel A. Krauss, JD, PhD
The Monitor on Psychology
December 2014, Vol 45, No. 11
Print version: page 24
The Pennsylvania Supreme Court recently decided two cases that address whether parties can use expert witnesses to help juries assess lay witness testimony. In one case, Commonwealth v. Walker (2014), the court lifted a ban on the admissibility of expert testimony regarding eyewitness identification. In the other case, Commonwealth v. Alicia (2014), the court held that expert testimony regarding false confessions was inadmissible.
Although the two outcomes diverged, robust research suggests that eyewitness identification and false confessions pose significant problems for the legal system (Wells et al., 1998; Kassin et al., 2010). So, how did the court justify its differing opinions? And what lessons can be learned from these discrepant decisions concerning how social science can influence legal decisions?
The entire article is here.
Tuesday, December 30, 2014
When Talking About Bias Backfires
By Adam Grant and Sheryl Sandberg
The New York Times - Sunday Review
Originally published December 6, 2014
Here is an excerpt:
Rather than merely informing managers that stereotypes persisted, they added that a “vast majority of people try to overcome their stereotypic preconceptions.” With this adjustment, discrimination vanished in their studies. After reading this message, managers were 28 percent more interested in working with the female candidate who negotiated assertively and judged her as 25 percent more likable.
When we communicate that a vast majority of people hold some biases, we need to make sure that we’re not legitimating prejudice. By reinforcing the idea that people want to conquer their biases and that there are benefits to doing so, we send a more effective message: Most people don’t want to discriminate, and you shouldn’t either.
The entire article is here.
Editor's note: Read the entire article and reflect on how this can influence the way in which psychologists communicate with patients.
The Dark Side of Free Will
Published on Dec 9, 2014
This talk was given at a local TEDx event, produced independently of the TED Conferences. What would happen if we all believed free will didn't exist? As a free will skeptic, Dr. Gregg Caruso contends our society would be better off believing there is no such thing as free will.
Monday, December 29, 2014
Why We Need to Abandon the Disease-Model of Mental Health Care
By Peter Kinderman
Scientific American Blog
Originally published on November 17, 2014
Here is an excerpt:
Some neuroscientists have asserted that all emotional distress can ultimately be explained in terms of the functioning of our neural synapses and their neurotransmitter signalers. But this logic applies to all human behavior and every human emotion and it doesn’t differentiate between distress — explained as a product of chemical “imbalances” — and “normal” emotions. Moreover, while it is clear that medication (like many other substances, including drugs and alcohol) has an effect on our neurotransmitters, and therefore on our emotions and behavior, this is a long way from supporting the idea that distressing experiences are caused by imbalances in those neurotransmitters.
Many people continue to assume that serious problems such as hallucinations and delusional beliefs are quintessentially biological in origin, but we now have considerable evidence that traumatic childhood experiences (poverty, abuse, etc.) are associated with later psychotic experiences. There is an almost knee-jerk assumption that suicide, for instance, is a consequence of an underlying illness, explicable only in biological terms.
The entire blog post is here.
Collaborating across cultures
Working with scientists from the Arab world improved my worldview, my career and my life. I urge you to collaborate with researchers from other cultures, too.
By Thomas Eissenberg, PhD
Monitor on Psychology
December 2014, Vol 45, No. 11
Print version: page 60
One reason to collaborate across cultures is that many global problems — environmental degradation, disease, conflict and inequity — cannot be addressed comprehensively without global partnerships.
Yet crafting empirically based solutions with colleagues around the world involves ethical issues that extend beyond those typically considered by an Institutional Review Board (IRB). That is, rewarding and successful cross-cultural collaboration demands that partners re-dedicate themselves to basic ethical principles that involve interactions with research participants and also interactions among researchers themselves.
The entire article is here.
Sunday, December 28, 2014
Psychologists and psychiatrists serving as expert witnesses in court: what do they know about eyewitness memory?
Annika Melinder & Svein Magnussen
Psychology, Crime & Law
Volume 21, Issue 1, 2015, pp 53-61
Abstract
Expert witnesses have various tasks that frequently include issues of memory. We tested if expert witnesses outperform other practitioners on memory issues of high relevance to clinical practice. We surveyed psychiatrists and psychologists who reported serving as expert witnesses in court (n = 117) about their knowledge and beliefs about human memory. The results were compared to a sample of psychiatrists and psychologists who had never served as expert witnesses (n = 819). Contrary to our expectations, the professionals serving as expert witnesses did not outperform the practitioners who never served. A substantial minority of the respondents harbored scientifically unproven ideas of human memory on issues such as the memory of small children, repression of adult traumatic memories, and recovered traumatic childhood memories. We conclude that the expert witnesses are at risk of offering bad recommendations to the court in trials where reliability of eyewitness memory is at stake.
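The study's key contrast is a between-group comparison of how often expert witnesses versus practitioners who never testify endorse unproven memory claims. As a rough, hypothetical illustration of that kind of analysis (the counts below are invented, not taken from the paper), a contingency-table test might look like this:

```python
# Hypothetical illustration only: compare endorsement rates of an unproven
# memory claim between expert witnesses (n = 117) and practitioners who have
# never served (n = 819), using a chi-square test on a 2x2 table.
from scipy.stats import chi2_contingency

#          endorsed   did not endorse
table = [[40,  77],    # expert witnesses (invented counts)
         [270, 549]]   # never served as expert witnesses (invented counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# A non-significant p here would mirror the paper's finding that expert
# witnesses did not outperform the comparison group.
```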
The entire article is here.
Saturday, December 27, 2014
Coaxing better behavior
Behavioral science is playing a pivotal role in research and policymaking that seeks to gently steer us toward better decisions.
By Tori DeAngelis
The Monitor on Psychology
December 2014, Vol 45, No. 11
Print version: page 62
Here is an excerpt:
But social psychology has probably never held as much potential to change global outcomes as it does now. Governments and other organizations are applying "nudge principles" — psychologists' findings about the human propensities that influence our decisions and actions — to collect unpaid taxes, reduce child mortality, and help people choose healthier foods and make better environmental choices. In one line of study, for example, researchers found that people recycled much more when their trash bin lids featured cut-out shapes of the objects to be recycled, be they circles for cans and bottles or slits for paper.
The entire article is here.
Friday, December 26, 2014
Evolution and the American Myth of the Individual
By John Edward Terrell
The New York Times - Opinion Pages
Originally posted November 30, 2014
Here is an excerpt:
When I was a boy I was taught that the Old Testament is about our relationship with God and the New Testament is about our responsibilities to one another. I now know this division of biblical wisdom is too simple. I have also learned that in the eyes of many conservative Americans today, religion and evolution do not mix. You either accept what the Bible tells us or what Charles Darwin wrote, but not both. The irony here is that when it comes to our responsibilities to one another as human beings, religion and evolution nowadays are not necessarily on opposite sides of the fence. And as Matthew D. Lieberman, a social neuroscience researcher at the University of California, Los Angeles, has written: “we think people are built to maximize their own pleasure and minimize their own pain. In reality, we are actually built to overcome our own pleasure and increase our own pain in the service of following society’s norms.”
While I do not entirely accept the norms clause of Lieberman’s claim, his observation strikes me as evocatively religious.
The entire article is here.
Science, Trust And Psychology In Crisis
By Tania Lombrozo
NPR
Originally published June 2, 2014
Here is an excerpt:
Researchers who engage in p-diligence are those who engage in practices — such as additional analyses or even experiments — designed to evaluate the robustness of their results, whether or not these practices make it into print. They might, for example, analyze their data with different exclusion criteria — not to choose the criterion that makes some effect most dramatic but to make sure that any claims in the paper don't depend on this potentially arbitrary decision. They might analyze the data using two statistical methods — not to choose the single one that yields a significant result but to make sure that they both do. They might build in checks for various types of human errors and analyze uninteresting aspects of the data to make sure there's nothing weird going on, like a bug in their code.
If these additional data or analyses reveal anything problematic, p-diligent researchers will temper their claims appropriately, or pursue further investigation as needed. And they'll engage in these practices with an eye toward avoiding potential pitfalls, such as confirmation bias and the seductions of p-hacking, that could lead to systematic errors. In other words, they'll "do their p-diligence" to make sure that they — and others — should invest in their claims.
P-hacking and p-diligence have something in common: Both involve practices that aren't fully reported in publication. As a consequence, they widen the gap. But let's face it: While the gap can (and sometimes should) be narrowed, it cannot be closed.
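As a concrete, purely illustrative sketch of what p-diligence can look like in practice, the snippet below reruns the same simulated group comparison under several exclusion criteria and with two different tests. The cutoffs, variables, and data are invented for illustration, not taken from Lombrozo's piece.

```python
# Illustrative p-diligence sketch on simulated data: check that a group
# difference does not hinge on one exclusion criterion or one statistical test.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(0)
scores_a = rng.normal(0.4, 1.0, 120)          # simulated condition A
scores_b = rng.normal(0.0, 1.0, 120)          # simulated condition B
rt_a = rng.uniform(0.2, 3.0, 120)             # simulated response times, A
rt_b = rng.uniform(0.2, 3.0, 120)             # simulated response times, B

for label, cutoff in {"no exclusions": np.inf,
                      "rt < 2.5 s": 2.5,
                      "rt < 2.0 s": 2.0}.items():
    a, b = scores_a[rt_a < cutoff], scores_b[rt_b < cutoff]
    p_t = ttest_ind(a, b).pvalue              # parametric test
    p_u = mannwhitneyu(a, b).pvalue           # non-parametric test
    print(f"{label:>13}: t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")
# If the conclusion survives every row, it does not depend on these
# potentially arbitrary analytic choices.
```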
The entire article is here.
Thanks to Ed Zuckerman for this lead.
Thursday, December 25, 2014
Effects of biological explanations for mental disorders on clinicians’ empathy
By Matthew S. Lebowitz and Woo-kyoung Ahn
PNAS 2014; published ahead of print, doi:10.1073/pnas.1414058111
Abstract
Mental disorders are increasingly understood in terms of biological mechanisms. We examined how such biological explanations of patients’ symptoms would affect mental health clinicians’ empathy—a crucial component of the relationship between treatment-providers and patients—as well as their clinical judgments and recommendations. In a series of studies, US clinicians read descriptions of potential patients whose symptoms were explained using either biological or psychosocial information. Biological explanations have been thought to make patients appear less accountable for their disorders, which could increase clinicians’ empathy. To the contrary, biological explanations evoked significantly less empathy. These results are consistent with other research and theory that has suggested that biological accounts of psychopathology can exacerbate perceptions of patients as abnormal, distinct from the rest of the population, meriting social exclusion, and even less than fully human. Although the ongoing shift toward biomedical conceptualizations has many benefits, our results reveal unintended negative consequences.
Significance
Mental disorders are increasingly understood biologically. We tested the effects of biological explanations among mental health clinicians, specifically examining their empathy toward patients. Conventional wisdom suggests that biological explanations reduce perceived blameworthiness against those with mental disorders, which could increase empathy. Yet, conceptualizing mental disorders biologically can cast patients as physiologically different from “normal” people and as governed by genetic or neurochemical abnormalities instead of their own human agency, which can engender negative social attitudes and dehumanization. This suggests that biological explanations might actually decrease empathy. Indeed, we find that biological explanations significantly reduce clinicians’ empathy. This is alarming because clinicians’ empathy is important for the therapeutic alliance between mental health providers and patients and significantly predicts positive clinical outcomes.
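As a purely illustrative sketch of the basic design (simulated ratings, not the authors' data or analysis), one could compare clinicians' empathy ratings across vignette conditions and report a standardized effect size:

```python
# Simulated illustration: empathy ratings after a biological vs. a psychosocial
# explanation of the same symptoms, compared with a t-test and Cohen's d.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
empathy_biological   = rng.normal(5.0, 1.2, 60)   # hypothetical 1-9 ratings
empathy_psychosocial = rng.normal(5.8, 1.2, 60)

t, p = ttest_ind(empathy_biological, empathy_psychosocial)
pooled_sd = np.sqrt((empathy_biological.var(ddof=1)
                     + empathy_psychosocial.var(ddof=1)) / 2)
d = (empathy_biological.mean() - empathy_psychosocial.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
# A negative d would indicate lower empathy in the biological-explanation condition.
```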
The entire article is here.
Wednesday, December 24, 2014
What do Philosophers of Mind Actually Do: Some Quantitative Data
By Joshua Knobe
The Brains Blog
Originally published December 5, 2014
There seems to be a widely shared sense these days that the philosophical study of mind has been undergoing some pretty dramatic changes. Back in the twentieth century, the field was dominated by a very specific sort of research program, but it seems like less and less work is being done within that traditional program, while there is an ever greater amount of work pursuing issues that have a completely different sort of character.
To get a better sense for precisely how the field has changed, I thought it might be helpful to collect some quantitative data. Specifically, I compared a sample of highly cited papers from the past five years (2009-2013) with a sample of highly cited papers from a period in the twentieth century (1960-1999). You can find all of the nitty gritty details in this forthcoming paper, but the basic results are pretty easy to summarize.
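A tiny, hypothetical sketch of the kind of tally such a comparison rests on follows; the topic labels and counts are made up for illustration and are not taken from the paper.

```python
# Made-up illustration: label each highly cited paper with a topic and compare
# the topic mix across the two periods.
import pandas as pd

papers = pd.DataFrame({
    "period": ["1960-1999"] * 4 + ["2009-2013"] * 4,
    "topic": ["metaphysics of mind", "metaphysics of mind", "consciousness",
              "perception", "moral psychology", "experimental philosophy",
              "consciousness", "moral psychology"],
})

topic_mix = pd.crosstab(papers["period"], papers["topic"], normalize="index")
print(topic_mix.round(2))   # share of each topic within each period
```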
The entire blog post is here.
Don't Execute Schizophrenic Killers
By Sally L. Satel
Bloomberg View
Originally posted December 1, 2014
Is someone who was diagnosed with schizophrenia years before committing murder sane enough to be sentenced to death?
The government thinks so in the case of Scott L. Panetti, 56, who will die on Wednesday by lethal injection in Texas unless Governor Rick Perry stays the execution.
(cut)
This is unjust. It is wrong to execute, even to punish, people who are so floridly psychotic when they commit their crimes that they are incapable of correcting the errors by logic or evidence.
Yet Texas, like many other states, considers a defendant sane as long as he knows, factually, that murder is wrong. Indeed, Panetti’s jury, which was instructed to apply this narrow standard, may have been legally correct to reject his insanity defense because he may have known that the murders were technically wrong.
The entire article is here.
Tuesday, December 23, 2014
Self-Driving Cars: Safer, but What of Their Morals?
By Justin Pritchard
Associated Press
Originally posted November 19, 2014
Here is an excerpt:
"This is one of the most profoundly serious decisions we can make. Program a machine that can foreseeably lead to someone's death," said Lin. "When we make programming decisions, we expect those to be as right as we can be."
What right looks like may differ from company to company, but according to Lin automakers have a duty to show that they have wrestled with these complex questions and publicly reveal the answers they reach.
The entire article is here.
Harm to others outweighs harm to self in moral decision making
Molly J. Crockett, Zeb Kurth-Nelson, Jenifer Z. Siegel, Peter Dayan, and Raymond J. Dolan
PNAS 2014; published ahead of print November 17, 2014, doi:10.1073/pnas.1408988111
Abstract
Concern for the suffering of others is central to moral decision making. How humans evaluate others’ suffering, relative to their own suffering, is unknown. We investigated this question by inviting subjects to trade off profits for themselves against pain experienced either by themselves or an anonymous other person. Subjects made choices between different amounts of money and different numbers of painful electric shocks. We independently varied the recipient of the shocks (self vs. other) and whether the choice involved paying to decrease pain or profiting by increasing pain. We built computational models to quantify the relative values subjects ascribed to pain for themselves and others in this setting. In two studies we show that most people valued others’ pain more than their own pain. This was evident in a willingness to pay more to reduce others’ pain than their own and a requirement for more compensation to increase others’ pain relative to their own. This ‟hyperaltruistic” valuation of others’ pain was linked to slower responding when making decisions that affected others, consistent with an engagement of deliberative processes in moral decision making. Subclinical psychopathic traits correlated negatively with aversion to pain for both self and others, in line with reports of aversive processing deficits in psychopathy. Our results provide evidence for a circumstance in which people care more for others than themselves. Determining the precise boundaries of this surprisingly prosocial disposition has implications for understanding human moral decision making and its disturbance in antisocial behavior.
Significance
Concern for the welfare of others is a key component of moral decision making and is disturbed in antisocial and criminal behavior. However, little is known about how people evaluate the costs of others’ suffering. Past studies have examined people’s judgments in hypothetical scenarios, but there is evidence that hypothetical judgments cannot accurately predict actual behavior. Here we addressed this issue by measuring how much money people will sacrifice to reduce the number of painful electric shocks delivered to either themselves or an anonymous stranger. Surprisingly, most people sacrifice more money to reduce a stranger’s pain than their own pain. This finding may help us better understand how people resolve moral dilemmas that commonly arise in medical, legal, and political decision making.
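The following is a minimal sketch of the kind of harm-aversion choice model the abstract describes, not the authors' exact specification: money and shock differences between two options are combined with a recipient-specific harm-aversion weight kappa, choices are modeled with a logistic (softmax) rule, and kappa_other exceeding kappa_self would correspond to the "hyperaltruistic" pattern. All trial data below are invented.

```python
# Sketch of a harm-aversion choice model (illustrative, not the published model).
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, m_diff, s_diff, chose_first, is_other):
    kappa_self, kappa_other, beta = params
    kappa = np.where(is_other, kappa_other, kappa_self)
    dv = (1 - kappa) * m_diff - kappa * s_diff     # value of option 1 minus option 2
    p_first = 1.0 / (1.0 + np.exp(-beta * dv))     # logistic choice rule
    p = np.where(chose_first, p_first, 1.0 - p_first)
    return -np.sum(np.log(np.clip(p, 1e-9, 1.0)))

# invented trials: differences in money and shocks (option 1 minus option 2),
# the observed choice, and whether the shocks are delivered to the other person
m_diff = np.array([2.0, -1.0, 3.0, 0.5, -2.0, 1.0])
s_diff = np.array([4.0, -2.0, 1.0, 3.0, -1.0, 5.0])
chose_first = np.array([0, 1, 1, 0, 1, 0], dtype=bool)
is_other = np.array([1, 1, 0, 0, 1, 0], dtype=bool)

fit = minimize(neg_log_likelihood, x0=[0.5, 0.5, 1.0],
               args=(m_diff, s_diff, chose_first, is_other),
               bounds=[(0.0, 1.0), (0.0, 1.0), (0.01, 20.0)])
kappa_self, kappa_other, beta = fit.x
print(f"kappa_self = {kappa_self:.2f}, kappa_other = {kappa_other:.2f}")
# kappa_other above kappa_self would indicate valuing others' pain above one's own.
```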
The entire article is here.
Monday, December 22, 2014
Episode 18: Critical Incidents and Psychologist Safety
If you have missed AM radio, then you will appreciate this episode. John experiments with conference call software with his guests to discuss ethics and safety from a psychologist's point of view. I apologize for the squeaks (Shannon's phone), scratches, and other recording imperfections.
John's guests include Dr. Don McAleer, psychologist, gun owner, firearms instructor, firearm collector; Massad Ayoob, an international firearms and self-defense instructor, expert in lethal force encounters and shooting cases, and author; and, Dr. Shannon Clark, psychologist, FBI agent, active shooter and response instructor, and lifelong martial artist.
We discuss the ethics of nonmaleficence (do no harm) versus personal safety. It is no secret that psychologists are vulnerable to threat, assault, and stalking from patients and family members of patients. The overarching goal is to start a discussion for psychologists and mental health professionals about potential dangers for mental health professionals and some options to help keep them safe.
Click here to earn 1 APA-approved CE credit
At the end of this podcast, the listener will be able to:
1. Outline your personal values related to safety in your professional life.
2. List the options to enhance personal safety in your office.
3. Describe several responsible steps to take if you decide to carry a firearm or house one in your office.
Reading Material
Ken Pope: Therapists' Resources for Threats, Stalking, or Assaults by Patients
Robert B. Young: When a Psychiatrist Shoots to Kill
Dave Grossman: On Killing: The Psychological Cost of Learning to Kill in War and Society
Massad Ayoob: In the Gravest Extreme: The Role of the Firearm in Personal Protection
Massad Ayoob: Deadly Force: Understanding Your Right to Self Defense
Massad Ayoob Information
Sunday, December 21, 2014
“End-of-life” biases in moral evaluations of others
By George E. Newman, Kristi L. Lockhart, Frank C. Keil
Cognition, in press
Abstract
When evaluating the moral character of others, people show a strong bias to more heavily weigh behaviors at the end of an individual’s life, even if those behaviors arise in light of an overwhelmingly longer duration of contradictory behavior. Across four experiments, we find that this “end-of-life” bias uniquely applies to intentional changes in behavior that immediately precede death, and appears to result from the inference that the behavioral change reflects the emergence of the individual’s “true self”.
The entire article is here.
Saturday, December 20, 2014
Bioethics in 2025: what will be the challenges?
Deborah Bowman, Professor of Bioethics, Clinical Ethics and Medical Law at St. George’s University of London
Sarah Chan, Research Fellow in Bioethics and Law and Deputy Director of the Institute for Science, Ethics and Innovation at the University of Manchester
Molly Crockett, Associate Professor of Experimental Psychology at the University of Oxford
Gill Haddow, Senior Research Fellow in Science, Technology and Innovation Studies at the University of Edinburgh
For its 2014 annual public lecture, the Nuffield Council on Bioethics had four speakers from different disciplines present their take on what will be the main challenges in and for bioethics in the near future. Topics touched on included how to make bioethics more open and inclusive as a discipline; what role bioethicists should play in meeting future societal challenges; whether we will be able to develop a 'morality pill' in the near future; and how it might feel for people to have electronic or other material transplanted into them in the future to help their bodies cope with longer lives.
Friday, December 19, 2014
Is it okay to vet candidates on social media during recruitment?
By Science Daily
Originally posted December 8, 2014
Summary
The practice of cybervetting potential employees online as part of the recruitment process is the focus of a recent study. Is such surveillance an unethical invasion of privacy? Or is it simply a way for employers to enhance their review of formal credentials to ensure a good person-environment fit? The authors explore the legitimacy and outcomes of this practice following interviews with 45 recruiting managers.
The entire article is here.
The effects of punishment and appeals for honesty on children’s truth-telling behavior
By Victoria Talwar, Cindy Arruda, Sarah Yachison
Journal of Experimental Child Psychology
Volume 130, February 2015, Pages 209–217
Abstract
This study examined the effectiveness of two types of verbal appeals (external and internal motivators) and expected punishment in 372 children’s (4- to 8-year-olds) truth-telling behavior about a transgression. External appeals to tell the truth emphasized social approval by stating that the experimenter would be happy if the children told the truth. Internal appeals to tell the truth emphasized internal standards of behavior by stating that the children would be happy with themselves if they told the truth. Results indicate that with age children are more likely to lie and maintain their lie during follow-up questioning. Overall, children in the External Appeal conditions told the truth significantly more compared with children in the No Appeal conditions. Children who heard internal appeals with no expected punishment were significantly less likely to lie compared with children who heard internal appeals when there was expected punishment. The results have important implications regarding the impact of socialization on children’s honesty and promoting children’s veracity in applied situations where children’s honesty is critical.
Highlights
• The effectiveness of verbal appeals and punishment on children’s honesty was examined.
• External appeals emphasized the importance of truth-telling for social approval.
• Internal appeals emphasized internal standards of behavior.
• Overall children in the external appeal conditions were least likely to lie.
• The efficacy of internal appeals was attenuated by expected punishment.
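A hypothetical sketch of how such a design might be analyzed follows: a logistic model of whether a child lied as a function of age and condition. The data, condition labels, and coefficients are simulated for illustration and are not the authors' analysis.

```python
# Simulated illustration: logistic model of lying as a function of age and
# appeal/punishment condition (data and coefficients are invented).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 372
df = pd.DataFrame({
    "age": rng.integers(4, 9, n),                      # 4- to 8-year-olds
    "condition": rng.choice(["no_appeal", "external_appeal",
                             "internal_appeal", "internal_appeal_punished"], n),
})
# simulate lying that rises with age and is least common under external appeals
log_odds = -2.0 + 0.35 * df["age"] - 0.8 * (df["condition"] == "external_appeal")
df["lied"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

model = smf.logit("lied ~ age + C(condition)", data=df).fit(disp=0)
print(model.params)   # a positive age coefficient means older children lie more often
```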
The entire article is here.
Thursday, December 18, 2014
Prosecutor questions ethics of Jodi Arias witness
By Megan Cassidy
The Arizona Republic via USA Today
Originally published November 26, 2014
Prosecutor Juan Martinez on Tuesday continued his steady drum of implications and accusations against a defense expert for Jodi Arias in an attempt to discredit favorable testimony for the convicted killer.
Psychologist L.C. Miccio-Fonseca examined the sexual relationship between Arias and victim Travis Alexander, Arias' sometimes lover.
The entire article is here.
Value Judgments and the True Self
By George E. Newman, Paul Bloom, & Joshua Knobe
Personality and Social Psychology Bulletin, February 2014, Vol. 40, No. 2, 203-216
Abstract
The belief that individuals have a “true self” plays an important role in many areas of psychology as well as everyday life. The present studies demonstrate that people have a general tendency to conclude that the true self is fundamentally good—that is, that deep inside every individual, there is something motivating him or her to behave in ways that are virtuous. Study 1 finds that observers are more likely to see a person’s true self reflected in behaviors they deem to be morally good than in behaviors they deem to be bad. Study 2 replicates this effect and demonstrates observers’ own moral values influence what they judge to be another person’s true self. Finally, Study 3 finds that this normative view of the true self is independent of the particular type of mental state (beliefs vs. feelings) that is seen as responsible for an agent’s behavior.
The entire article is here.
Wednesday, December 17, 2014
APA names lawyer to examine claims it aided U.S. government in shielding psychologists who tortured prisoners
By John Bohannon
Science Magazine
Originally published November 17, 2014
The American Psychological Association (APA) last week named a former federal prosecutor to lead an investigation into its role in supporting the U.S. government’s interrogation of suspected terrorists.
A new book by reporter James Risen of The New York Times alleges that APA, the largest U.S. professional association of psychologists, bent its ethical guidelines to give psychologists permission to conduct such interrogations at the U.S. military base at Guantánamo Bay, Cuba, and elsewhere. The motivation, according to Risen, was to stay in the good graces of U.S. intelligence and defense officials. APA has denied the allegations and says that it worked closely with the CIA and the Pentagon "to ensure that national security policies were well-informed by empirical science."
The entire article is here.
(Un)just Deserts: The Dark Side of Moral Responsibility
By Gregg D. Caruso
What would be the consequence of embracing skepticism about free will and/or desert-based moral responsibility? What if we came to disbelieve in moral responsibility? What would this mean for our interpersonal relationships, society, morality, meaning, and the law? What would it do to our standing as human beings? Would it cause nihilism and despair as some maintain? Or perhaps increase anti-social behavior as some recent studies have suggested (Vohs and Schooler 2008; Baumeister, Masicampo, and DeWall 2009)? Or would it rather have a humanizing effect on our practices and policies, freeing us from the negative effects of what Bruce Waller calls the “moral responsibility system” (2014, p. 4)? These questions are of profound pragmatic importance and should be of interest independent of the metaphysical debate over free will. As public proclamations of skepticism continue to rise, and as the mass media continues to run headlines announcing that free will and moral responsibility are illusions, we need to ask what effects this will have on the general public and what the responsibility of professionals is.
In recent years a small industry has actually grown up around precisely these questions. In the skeptical community, for example, a number of different positions have been developed and advanced—including Saul Smilansky’s illusionism (2000), Thomas Nadelhoffer’s disillusionism (2011), Shaun Nichols’ anti-revolution (2007), and the optimistic skepticism of Derk Pereboom (2001, 2013a, 2013b), Bruce Waller (2011), Tamler Sommers (2005, 2007), and others.
The entire article is here.
Tuesday, December 16, 2014
The Shrinking World of Ideas
By Arthur Krystal
The Chronicle Review
Originally posted November 21, 2014
Here is an excerpt:
For instance, psychologists and legal scholars, spurred by brain research and sophisticated brain-scanning techniques, have begun to reconsider ideas about volition. If all behavior has an electrochemical component, then in what sense—psychological, legal, moral—is a person responsible for his actions? Joshua Greene and Jonathan Cohen in a famous 2004 paper contend that neuroscience has put a new spin on free will and culpability: It "can help us see that all behavior is mechanical, that all behavior is produced by chains of physical events that ultimately reach back to forces beyond the agent’s control." Their hope is that the courts will ultimately discard blame-based punishment in favor of more "consequentialist approaches."
All this emphasis on the biological basis of human behavior is not to everyone’s liking. The British philosopher Roger Scruton, for one, takes exception to the notion that neuroscience can explain us to ourselves. He rejects the thought that the structure of the brain also structures the person, since an important distinction exists between an event in the brain and the behavior that follows. And, by the same token, the firing of neurons does not in a strictly causal sense account for identity, since a "person" is not identical to his or her physiological components.
The entire article is here.
Core Values Versus Common Sense: Consequentialist Views Appear Less Rooted in Morality
By Tamar A. Kreps and Benoît Monin
Personality and Social Psychology Bulletin
November 2014 vol. 40 no. 11 1529-1542
Abstract
When a speaker presents an opinion, an important factor in audiences’ reactions is whether the speaker seems to be basing his or her decision on ethical (as opposed to more pragmatic) concerns. We argue that, despite a consequentialist philosophical tradition that views utilitarian consequences as the basis for moral reasoning, lay perceivers think that speakers using arguments based on consequences do not construe the issue as a moral one. Five experiments show that, for both political views (including real State of the Union quotations) and organizational policies, consequentialist views are seen to express less moralization than deontological views, and even sometimes than views presented with no explicit justification. We also demonstrate that perceived moralization in turn affects speakers’ perceived commitment to the issue and authenticity. These findings shed light on lay conceptions of morality and have practical implications for people considering how to express moral opinions publicly.
The entire article is here.
Monday, December 15, 2014
Trait Positive Emotion Is Associated with Increased Self-Reported Empathy but Decreased Empathic Performance
By Hillary C. Devlin, Jamil Zaki, Desmond C. Ong, and June Gruber
Published: October 29, 2014; DOI: 10.1371/journal.pone.0110470
Abstract
How is positive emotion associated with our ability to empathize with others? Extant research provides support for two competing predictions about this question. An empathy amplification hypothesis suggests positive emotion would be associated with greater empathy, as it often enhances other prosocial processes. A contrasting empathy attenuation hypothesis suggests positive emotion would be associated with lower empathy, because positive emotion promotes self-focused or antisocial behaviors. The present investigation tested these competing perspectives by examining associations between dispositional positive emotion and both subjective (i.e., self-report) and objective (i.e., task performance) measures of empathy. Findings revealed that although trait positive emotion was associated with increased subjective beliefs about empathic tendencies, it was associated with both increases and decreases in task-based empathic performance depending on the target’s emotional state. More specifically, trait positive emotion was linked to lower overall empathic accuracy toward a high-intensity negative target, but also a higher sensitivity to emotion upshifts (i.e., shifts in emotion from negative to positive) toward positive targets. This suggests that trait positive affect may be associated with decreased objective empathy in the context of mood incongruent (i.e., negative) emotional stimuli, but may increase some aspects of empathic performance in the context of mood congruent (i.e., positive) stimuli. Taken together, these findings suggest that trait positive emotion engenders a compelling subjective-objective gap regarding its association with empathy, in being related to a heightened perception of empathic tendencies, despite being linked to mixed abilities in regards to empathic performance.
The entire article is here.
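Editor's note: the abstract contrasts self-reported empathy with task-based empathic performance. In performance tasks of this kind, empathic accuracy is commonly scored as the correspondence between a target's own continuous emotion ratings and a perceiver's moment-to-moment ratings of that same target. The Python sketch below illustrates that general scoring idea with made-up ratings; the data, the Pearson-correlation scoring, and the "upshift" example are illustrative assumptions, not the authors' materials or analysis code.

```python
import numpy as np

def empathic_accuracy(target_ratings, perceiver_ratings):
    """Correlation between a target's own continuous emotion ratings and a
    perceiver's ratings of that target; higher values = more accurate."""
    t = np.asarray(target_ratings, dtype=float)
    p = np.asarray(perceiver_ratings, dtype=float)
    return float(np.corrcoef(t, p)[0, 1])

# Illustrative data: a target whose emotion shifts from negative to positive
# (an "upshift"), and two perceivers who track that shift more or less closely.
target      = [-3, -3, -2, -1, 0, 1, 2, 2, 3, 3]
perceiver_a = [-3, -2, -2, -1, 0, 1, 1, 2, 3, 3]   # tracks the whole trajectory
perceiver_b = [ 0,  0,  0,  0, 0, 1, 1, 1, 1, 1]   # misses the negative phase

print("perceiver A accuracy:", round(empathic_accuracy(target, perceiver_a), 2))
print("perceiver B accuracy:", round(empathic_accuracy(target, perceiver_b), 2))
```

Both hypothetical perceivers correlate positively with the target, but A scores higher because A tracks the negative phase as well as the upshift, which is the kind of distinction the abstract's mixed findings turn on.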
Implicit Bias and Moral Responsibility: Probing the Data.
By Neil Levy
Abstract
Psychological research strongly suggests that many people harbor implicit attitudes that diverge from their explicit attitudes, and that under some conditions these people can be expected to perform actions that owe their moral character to the agent’s implicit attitudes. In this paper, I pursue the question whether agents are morally responsible for these actions by probing the available evidence concerning the kind of representation an implicit attitude is. Building on previous work, I argue that the reduction in the degree and kind of reasons sensitivity these attitudes display undermines agents’ responsibility-level control over the moral character of actions. I also argue that these attitudes do not fully belong to agents’ real selves in ways that would justify holding them responsible on accounts that centre on attributability.
The entire article is here.
Sunday, December 14, 2014
Privacy and Information Technology
By Jeroen van den Hoven, Martijn Blaauw, Wolter Pieters, and Martijn Warnier
The Stanford Encyclopedia of Philosophy (Winter 2014 Edition), Edward N. Zalta (ed.)
Human beings value their privacy and the protection of their personal sphere of life. They value some control over who knows what about them. They certainly do not want their personal information to be accessible to just anyone at any time. But recent advances in information technology threaten privacy and have reduced the amount of control over personal data and open up the possibility of a range of negative consequences as a result of access to personal data. The 21st century has become the century of Big Data and advanced Information Technology allows for the storage and processing of exabytes of data. The revelations of Edward Snowden have demonstrated that these worries are real and that the technical capabilities to collect, store and search large quantities of data concerning telephone conversations, internet searches and electronic payment are now in place and are routinely used by government agencies. For business firms, personal data about customers and potential customers are now also a key asset. At the same time, the meaning and value of privacy remains the subject of considerable controversy. The combination of increasing power of new technology and the declining clarity and agreement on privacy give rise to problems concerning law, policy and ethics. The focus of this article is on exploring the relationship between information technology (IT) and privacy. We will both illustrate the specific threats that IT and innovations in IT pose for privacy, and indicate how IT itself might be able to overcome these privacy concerns by being developed in a ‘privacy-sensitive way’. We will also discuss the role of emerging technologies in the debate, and account for the way in which moral debates are themselves affected by IT.
The entire entry is here.
Saturday, December 13, 2014
If Everything Is Getting Better, Why Do We Remain So Pessimistic?
By the Cato Institute
Featuring Steven Pinker, Johnstone Family Professor of Psychology, Harvard University; with comments by Brink Lindsey, Vice President for Research, Cato Institute; and Charles Kenny, Senior Fellow, Center for Global Development
Originally posted November 19, 2014
Evidence from academic institutions and international organizations shows dramatic improvements in human well-being. These improvements are especially striking in the developing world. Unfortunately, there is often a wide gap between reality and public perceptions, including that of many policymakers, scholars in unrelated fields, and intelligent lay persons. To make matters worse, the media emphasizes bad news, while ignoring many positive long-term trends. Please join us for a discussion of psychological, physiological, cultural, and other social reasons for the persistence of pessimism in the age of growing abundance.
The video and audio can be seen or downloaded here.
Editor's note: This video is important for psychologists because it shows cultural trends and beliefs that may be perpetuated by media hype. This panel also highlights cognitive distortions, well-being, and positive macro trends. If you can, watch the first presenter, Dr. Steven Pinker. If nothing else, you may feel a little better after watching the video.
Friday, December 12, 2014
Culture Of Psychological Science At Stake
By Tania Lombrozo
NPR Cosmos and Culture
Originally published November 18, 2014
In a video released today at Edge.org, psychologist Simone Schnall raises interesting questions about the role of replication in social psychology and about what counts as "admissible evidence" in science.
Schnall comes at the topic from recent experience: One of her studies was selected for a replication attempt by a registered replication project, and the replication failed to find the effect from her original study.
An occasional failure to replicate isn't too surprising or disruptive to the field — what makes Schnall's case somewhat unique is the discussion that ensued, which occurred largely on blogs and social media. And it got ugly.
The entire NPR article is here.
Dr. Schnall's Edge Video is here.
The Neuroscience of Moral Decision Making
By Molly Crockett
Edge Video Series
Originally published November 18, 2014
Here is an excerpt:
The neurochemistry adds an interesting layer to this bigger question of whether punishment is prosocially motivated, because in some ways it's a more objective way to look at it. Serotonin doesn't have a research agenda; it's just a chemical. We had all this data and we started thinking differently about the motivations of so-called altruistic punishment. That inspired a purely behavioral study where we give people the opportunity to punish those who behave unfairly towards them, but we do it in two conditions. One is a standard case where someone behaves unfairly to someone else and then that person can punish them. Everyone has full information, and the guy who's unfair knows that he's being punished.
Then we added another condition, where we give people the opportunity to punish in secret— hidden punishment. You can punish someone without them knowing that they've been punished. They still suffer a loss financially, but because we obscure the size of the stake, the guy who's being punished doesn't know he's being punished. The punisher gets the satisfaction of knowing that the bad guy is getting less money, but there's no social norm being enforced.
The entire video and transcript is here.
Thursday, December 11, 2014
Moral Evaluations Depend Upon Mindreading Moral Occurrent Beliefs
By Clayton R. Critcher, Erik G. Helzer, David Tannenbaum, and David A. Pizarro
Abstract
People evaluate the moral character of others not merely based on what they do, but why they do it. Because an agent’s state of mind is not directly observable, people typically engage in mindreading—attempts at inferring mental states—when forming moral evaluations. The present paper identifies a heretofore unstudied focus of mindreading, moral occurrent beliefs—the cognitions (e.g., thoughts, beliefs, principles, concerns, rules) accessible in an agent’s mind while confronting a morally-relevant decision that could provide a moral justification for a particular course of action. Whereas previous mindreading research has examined how people “reason back” to make sense of why agents behaved as they did, we instead ask how mindread occurrent beliefs (MOBs) constrain moral evaluations for an agent’s subsequent actions. Our studies distinguish three accounts of how MOBs influence moral evaluations, show that people rely on MOBs spontaneously (instead of merely when experimental measures draw attention to them), and identify non-moral cues (e.g., whether the situation demands a quick decision) that guide MOBs. Implications for theory of mind, moral psychology, and social cognition are discussed.
The entire paper is here.
Left Out in the Cold: Seven Reasons Not to Freeze Your Eggs
By Françoise Baylis
Impact Ethics
Originally posted October 16, 2014
Here is an excerpt:
These professional cautions are of no consequence to Facebook or Apple, however. Both of these companies have decided to include egg freezing in their employee benefit package. As an alternative, they could have decided to improve the health benefits offered to all employees. Or, to stay focused on the issue of reproduction, they could have included a full year of family leave in the benefit package. Instead, they chose to pay up to $20,000 for egg freezing. Now call me crazy, but I think this choice just might have to do with their corporate priorities – which include keeping talented workers in their 20s to early 30s in the workplace, not at home caring for babies.
(cut)
Second, contrary to popular belief, egg freezing does not set back a woman’s biological clock. While it is certainly true that eggs from a younger woman are more likely to generate a healthy embryo and a healthy pregnancy than eggs from an older woman, it very much matters that the body into which the embryos will be transferred is the body of an older woman. From a purely biological perspective, it is in the interest of women to have their children while they are younger.
The entire story is here.
Wednesday, December 10, 2014
Business culture and dishonesty in the banking industry
By Alain Cohn, Ernst Fehr & Michel André Maréchal
Nature (2014) doi:10.1038/nature13977
Published online 19 November 2014
Abstract
Trust in others’ honesty is a key component of the long-term performance of firms, industries, and even whole countries. However, in recent years, numerous scandals involving fraud have undermined confidence in the financial industry. Contemporary commentators have attributed these scandals to the financial sector’s business culture, but no scientific evidence supports this claim. Here we show that employees of a large, international bank behave, on average, honestly in a control condition. However, when their professional identity as bank employees is rendered salient, a significant proportion of them become dishonest. This effect is specific to bank employees because control experiments with employees from other industries and with students show that they do not become more dishonest when their professional identity or bank-related items are rendered salient. Our results thus suggest that the prevailing business culture in the banking industry weakens and undermines the honesty norm, implying that measures to re-establish an honest culture are very important.
The article can be found here.
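Editor's note: the abstract reports that employees "behave, on average, honestly" in a control condition but does not describe how honesty was measured. In unverifiable coin-flip reporting tasks of the kind commonly used for this, dishonesty is inferred at the group level by comparing pooled self-reported wins against the 50% chance benchmark, because no individual report can be checked. The sketch below shows that inference with hypothetical numbers; it is a generic illustration of the logic, not the study's data or analysis code.

```python
from math import comb

def binomial_upper_tail(successes, trials, p=0.5):
    """P(X >= successes) for X ~ Binomial(trials, p)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical pooled reports: 200 unverifiable coin flips, 117 reported "wins".
reported_wins, total_flips = 117, 200
rate = reported_wins / total_flips
p_value = binomial_upper_tail(reported_wins, total_flips)
print(f"reported win rate: {rate:.1%}; P(at least this many wins by chance) = {p_value:.3g}")

# Simple point estimate: if a fraction d of flips are reported as wins regardless
# of the true outcome, the expected reported rate is 0.5 + 0.5*d.
print(f"implied share of misreported flips: {(rate - 0.5) / 0.5:.1%}")
```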
"How Do You Change People's Minds About What Is Right And Wrong?"
By David Rand
Edge Video
Originally posted November 18, 2014
I'm a professor of psychology, economics and management at Yale. The thing that I'm interested in, and that I spend pretty much all of my time thinking about, is cooperation—situations where people have the chance to help others at a cost to themselves. The questions that I'm interested in are how do we explain the fact that, by and large, people are quite cooperative, and even more importantly, what can we do to get people to be more cooperative, to be more willing to make sacrifices for the collective good?
There's been a lot of work on cooperation in different fields, and certain basic themes have emerged, what you might call mechanisms for promoting cooperation: ways that you can structure interactions so that people learn to cooperate. In general, if you imagine that most people in a group are doing the cooperative thing, paying costs to help the group as a whole, but there's some subset that's decided "Oh, we don't feel like it; we're just going to look out for ourselves," the selfish people will be better off. Then, either through an evolutionary process or an imitation process, that selfish behavior will spread.
The entire video and transcript is here.
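Editor's note: the dynamic Rand describes, in which defectors out-earn cooperators and imitation of better-paid others then spreads defection, is the standard public goods problem. The short simulation below is a minimal sketch of that logic; the group size, multiplier, starting mix, and "copy whoever earned more" rule are illustrative assumptions rather than details from the talk.

```python
import random

N = 100            # group size
COST = 1.0         # cost an individual pays to cooperate
MULTIPLIER = 3.0   # public good multiplier (less than N, so defection pays individually)
ROUNDS = 50

# Start with 90% cooperators (True) and 10% defectors (False).
agents = [True] * 90 + [False] * 10
random.shuffle(agents)

for rnd in range(ROUNDS):
    pot = sum(COST * MULTIPLIER for a in agents if a)   # total public good produced
    share = pot / N                                      # everyone receives an equal share
    payoffs = [share - (COST if a else 0.0) for a in agents]

    # Imitation: each agent compares itself with one randomly chosen agent
    # and copies that agent's strategy if it earned more this round.
    new_agents = list(agents)
    for i in range(N):
        j = random.randrange(N)
        if payoffs[j] > payoffs[i]:
            new_agents[i] = agents[j]
    agents = new_agents

    if rnd % 10 == 0:
        print(f"round {rnd:2d}: {sum(agents)} cooperators")
```

Because every defector earns exactly COST more than every cooperator each round, the imitation step steadily converts cooperators into defectors, which is the problem that the cooperation-promoting mechanisms Rand studies are meant to solve.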
Tuesday, December 9, 2014
APA Applauds Release of Senate Intelligence Committee Report Summary
American Psychological Association
Press Release
December 9, 2014
Says transparency will help protect human rights in the future
WASHINGTON — The American Psychological Association welcomed the release today of the Executive Summary of the Senate Select Committee on Intelligence report on the CIA’s detention and interrogation program during the George W. Bush administration. The document’s release recognizes American citizens’ right to know about the prior action of their government and is the best way to ensure that, going forward, the United States engages in national security programs that safeguard human rights and comply with international law.
The new details provided by the report regarding the extent and barbarity of torture techniques used by the CIA are sickening and morally reprehensible.
Two psychologists mentioned prominently in the report under pseudonyms, but identified in media reports as James Mitchell and Bruce Jessen, are not members of the American Psychological Association. Jessen was never a member; Mitchell resigned in 2006. Therefore, they are outside the reach of the association’s ethics adjudication process. Regardless of their membership status with APA, if the descriptions of their actions are accurate, they should be held fully accountable for violations of human rights and U.S. and international law.
Last month, the APA announced an independent review of the allegation by New York Times reporter and author James Risen that the association colluded with the Bush administration to support enhanced interrogation techniques that constituted torture. The review is being conducted by attorney David Hoffman of the law office Sidley Austin. Hoffman will be reviewing the released Senate Intelligence Committee report as a part of his APA review. Anyone with relevant information they wish to share with Hoffman is encouraged to communicate with him directly by email or phone at (312) 456-8468.
The American Psychological Association, in Washington, D.C., is the largest scientific and professional organization representing psychology in the United States. APA's membership includes nearly 130,000 researchers, educators, clinicians, consultants and students. Through its divisions in 54 subfields of psychology and affiliations with 60 state, territorial and Canadian provincial associations, APA works to advance the creation, communication and application of psychological knowledge to benefit society and improve people's lives.
Cognitive enhancement, legalising opium, and cognitive biases
By Joao Fabiano
Practical Ethics Blog
Originally published November 18, 2014
Suppose you want to enhance your cognition. A scientist hands you two drugs. Drug X has at least 19 controlled studies on the healthy individual showing it is effective, and while a handful of studies report a slight increase in blood pressure, another dozen conclude it is safe and non-addictive. Drug Y is also effective, but it increases mortality, has addiction potential and withdrawal symptoms. Which one do you choose? Great. Before you reach out for Drug X, the scientist warns you, “I should add, however, that Drug Y has been used by certain primitive communities for centuries, while Drug X has not.” Which one do you choose? Should this information have any bearing on your choice? I don’t think so. You probably conclude that primitive societies do all sorts of crazy things and you would be better off with actual, double-blind, controlled studies.
The entire blog post is here.
What we say and what we do: The relationship between real and hypothetical moral choices
By Oriel FeldmanHall, Dean Mobbs, Davy Evans, Lucy Hiscox, Lauren Navrady, & Tim Dalgleish
Cognition. Jun 2012; 123(3): 434–441.
doi: 10.1016/j.cognition.2012.02.001
Abstract
Moral ideals are strongly ingrained within society and individuals alike, but actual moral choices are profoundly influenced by tangible rewards and consequences. Across two studies we show that real moral decisions can dramatically contradict moral choices made in hypothetical scenarios (Study 1). However, by systematically enhancing the contextual information available to subjects when addressing a hypothetical moral problem—thereby reducing the opportunity for mental simulation—we were able to incrementally bring subjects’ responses in line with their moral behaviour in real situations (Study 2). These results imply that previous work relying mainly on decontextualized hypothetical scenarios may not accurately reflect moral decisions in everyday life. The findings also shed light on contextual factors that can alter how moral decisions are made, such as the salience of a personal gain.
Highlights
- We show people are unable to appropriately judge outcomes of moral behaviour.
- Moral beliefs have weaker impact when there is a presence of significant self-gain.
- People make highly self-serving choices in real moral situations.
- Real moral choices contradict responses to simple hypothetical moral probes.
- Enhancing context can cause hypothetical decisions to mirror real moral decisions.
Monday, December 8, 2014
Moral Saints
By Susan Wolf
The Journal of Philosophy
Vol 79, No. 8, 419-439
I don't know whether there are any moral saints. But if there are, I am glad that neither I nor those about whom I care most are among them. By moral saint I mean a person whose every action is as morally good as possible. Though I shall in a moment acknowledge the variety of types of person that might be thought to satisfy this description, it seems to me that none of these types serve as unequivocally compelling personal ideals. In other words, I believe that moral perfection, in the sense of moral saintliness, does not constitute a model of personal well-being toward which it would be particularly rational or good or desirable for a human being to strive.
The entire article is here.
Harm to others outweighs harm to self in moral decision making
By Molly J. Crockett, Zeb Kurth-Nelson, Jenifer Z. Siegel, Peter Dayan, and Raymond J. Dolan
PNAS 2014; published ahead of print November 17, 2014, doi:10.1073/pnas.1408988111
Abstract
Concern for the suffering of others is central to moral decision making. How humans evaluate others’ suffering, relative to their own suffering, is unknown. We investigated this question by inviting subjects to trade off profits for themselves against pain experienced either by themselves or an anonymous other person. Subjects made choices between different amounts of money and different numbers of painful electric shocks. We independently varied the recipient of the shocks (self vs. other) and whether the choice involved paying to decrease pain or profiting by increasing pain. We built computational models to quantify the relative values subjects ascribed to pain for themselves and others in this setting. In two studies we show that most people valued others’ pain more than their own pain. This was evident in a willingness to pay more to reduce others’ pain than their own and a requirement for more compensation to increase others’ pain relative to their own. This ‟hyperaltruistic” valuation of others’ pain was linked to slower responding when making decisions that affected others, consistent with an engagement of deliberative processes in moral decision making. Subclinical psychopathic traits correlated negatively with aversion to pain for both self and others, in line with reports of aversive processing deficits in psychopathy. Our results provide evidence for a circumstance in which people care more for others than themselves. Determining the precise boundaries of this surprisingly prosocial disposition has implications for understanding human moral decision making and its disturbance in antisocial behavior.
Significance
Concern for the welfare of others is a key component of moral decision making and is disturbed in antisocial and criminal behavior. However, little is known about how people evaluate the costs of others’ suffering. Past studies have examined people’s judgments in hypothetical scenarios, but there is evidence that hypothetical judgments cannot accurately predict actual behavior. Here we addressed this issue by measuring how much money people will sacrifice to reduce the number of painful electric shocks delivered to either themselves or an anonymous stranger. Surprisingly, most people sacrifice more money to reduce a stranger’s pain than their own pain. This finding may help us better understand how people resolve moral dilemmas that commonly arise in medical, legal, and political decision making.
The entire article is here.
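Editor's note: the abstract says the authors "built computational models to quantify the relative values subjects ascribed to pain," but the model itself is not given here. A common way to formalize this kind of money-versus-shocks trade-off is a harm-aversion weight inside a softmax choice rule, fitted separately for "self" and "other" trials; a larger weight on others' pain than on one's own would express the "hyperaltruistic" pattern described above. The sketch below applies that generic formulation to simulated choices; it illustrates the modeling idea and is not the authors' exact specification, parameters, or data.

```python
import numpy as np

def choice_prob(delta_money, delta_shocks, kappa, temperature=1.0):
    """Probability of choosing the better-paid but more painful option.
    kappa in [0, 1] weights pain against money (higher = more harm-averse)."""
    value = (1 - kappa) * delta_money - kappa * delta_shocks
    return 1.0 / (1.0 + np.exp(-value / temperature))

def neg_log_likelihood(kappa, choices, delta_money, delta_shocks):
    p = np.clip(choice_prob(delta_money, delta_shocks, kappa), 1e-9, 1 - 1e-9)
    return -np.sum(choices * np.log(p) + (1 - choices) * np.log(1 - p))

# Simulate one subject's trials and recover kappa by grid search.
rng = np.random.default_rng(0)
delta_money = rng.uniform(0, 10, size=200)      # extra profit for the harmful option
delta_shocks = rng.integers(1, 10, size=200)    # extra shocks it delivers
true_kappa = 0.6
choices = (rng.random(200) < choice_prob(delta_money, delta_shocks, true_kappa)).astype(float)

grid = np.linspace(0.01, 0.99, 99)
fits = [neg_log_likelihood(k, choices, delta_money, delta_shocks) for k in grid]
print("recovered kappa:", round(float(grid[int(np.argmin(fits))]), 2))
```

Fitting one kappa to trials where the shocks go to the subject and another to trials where they go to a stranger, then comparing the two, is the kind of analysis that would support the claim that others' pain is valued more than one's own.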
Sunday, December 7, 2014
The self is moral
We tend to think that our memories determine our identity, but it’s moral character that really makes us who we are
By Nina Strohminger
Aeon Magazine
Originally published November 17, 2014
Here is an excerpt:
Recent studies by the philosopher Shaun Nichols at the University of Arizona and myself support the view that the identity-conferring part of a person is his moral capacities. One of our experiments pays homage to Locke’s thought experiment by asking subjects which of a slew of traits a person would most likely take with him if his soul moved to a new body. Moral traits were considered more likely to survive a body swap than any other type of trait, mental or physical. Interestingly, certain types of memories – those involving people – were deemed fairly likely to survive the trip. But generic episodic memories, such as one’s commute to work, were not. People are not so much concerned with memory as with memory’s ability to connect us to others and our capacity for social action.
(cut)
Why does our identity detector place so much emphasis on moral capacities? These aren’t our most distinctive features. Our faces, our fingertips, our quirks, our autobiographies: any of these would be a more reliable way of telling who’s who. Somewhat paradoxically, identity has less to do with what makes us different from other people than with our shared humanity.
Saturday, December 6, 2014
Denying Problems When We Don’t Like the Solutions
By Duke University
Press Release
Originally published November 6, 2014
Here is an excerpt:
A new study from Duke University finds that people will evaluate scientific evidence based on whether they view its policy implications as politically desirable. If they don't, then they tend to deny the problem even exists.
“Logically, the proposed solution to a problem, such as an increase in government regulation or an extension of the free market, should not influence one’s belief in the problem. However, we find it does,” said co-author Troy Campbell, a Ph.D. candidate at Duke's Fuqua School of Business. “The cure can be more immediately threatening than the problem.”
The study, "Solution Aversion: On the Relation Between Ideology and Motivated Disbelief," appears in the November issue of the Journal of Personality and Social Psychology (viewable here).
The entire article is here.
Friday, December 5, 2014
Psychologist in "Kids for Cash" Scandal Surrenders License
By Roger DuPuis
The Times Leader
Originally published November 12, 2014
The psychologist brother-in-law of disgraced former Luzerne County judge Michael T. Conahan has given up his license for “gross incompetence, negligence or misconduct” in carrying out his past work evaluating juveniles in the county court system, state officials said Wednesday.
The Pennsylvania Board of Psychology said Frank James Vita, of Dorrance Township, “grossly deviated from ethical and professional standards” after reviewing 76 of the cases he had handled.
Vita once was linked to the county’s “Kids for Cash” judicial scandal in a civil suit that alleged he conspired with Conahan and fellow former judge Mark Ciavarella to perform evaluations that led to juveniles being incarcerated in facilities in which the judges had a financial interest.
The entire article is here.
Moral Injury Is The 'Signature Wound' Of Today's Veterans
Interview with David Wood
NPR
Originally posted November 11, 2014
Here is an excerpt:
On the best therapy for treating this "bruise on the soul"
The biggest thing that [the veterans] told me was that they're carrying around this horrible idea that they are bad people because they've done something bad and they can't ever tell anybody about it — or they don't dare tell anybody about it — and may not even be able to admit it to themselves.
One of the most healing things they have found is to stand in a group of fellow veterans and say, "This is what happened. This is what I saw. This is what I did," and to have their fellow veterans nod and say, "I hear you. I hear you." And just accept it, without saying, "Well, you couldn't help it," or, "You're really a good person at heart."
But just hearing it and accepting it — and not being blamed or castigated for whatever it was that you're feeling bad about. It's that validating kind of listening that is so important to all the therapies that I've seen.
The entire article is here.
Thursday, December 4, 2014
Why I Am Not a Utilitarian
By Julian Savulescu
Practical Ethics Blog
Originally posted November 15, 2014
Utilitarianism is a widely despised, denigrated and misunderstood moral theory.
Kant himself described it as a morality fit only for English shopkeepers. (Kant had much loftier aspirations of entering his own “noumenal” world.)
The adjective “utilitarian” now has negative connotations like “Machiavellian”. It is associated with “the end justifies the means” or using people as a mere means or failing to respect human dignity, etc.
For example, consider the following negative uses of “utilitarian.”
“Don’t be so utilitarian.”
“That is a really utilitarian way to think about it.”
To say someone is behaving in a utilitarian manner is to say something derogatory about their behaviour.
The entire article is here.
‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good
By Guy Kahane, Jim A. C. Everett, Brian D. Earp, Miguel Farias, and Julian Savulescu
Cognition, Vol 134, Jan 2015, pp 193-209.
Highlights
• ‘Utilitarian’ judgments in moral dilemmas were associated with egocentric attitudes and less identification with humanity.
• They were also associated with lenient views about clear moral transgressions.
• ‘Utilitarian’ judgments were not associated with views expressing impartial altruist concern for others.
• This lack of association remained even when antisocial tendencies were controlled for.
• So-called ‘utilitarian’ judgments do not express impartial concern for the greater good.
Abstract
A growing body of research has focused on so-called ‘utilitarian’ judgments in moral dilemmas in which participants have to choose whether to sacrifice one person in order to save the lives of a greater number. However, the relation between such ‘utilitarian’ judgments and genuine utilitarian impartial concern for the greater good remains unclear. Across four studies, we investigated the relationship between ‘utilitarian’ judgment in such sacrificial dilemmas and a range of traits, attitudes, judgments and behaviors that either reflect or reject an impartial concern for the greater good of all. In Study 1, we found that rates of ‘utilitarian’ judgment were associated with a broadly immoral outlook concerning clear ethical transgressions in a business context, as well as with sub-clinical psychopathy. In Study 2, we found that ‘utilitarian’ judgment was associated with greater endorsement of rational egoism, less donation of money to a charity, and less identification with the whole of humanity, a core feature of classical utilitarianism. In Studies 3 and 4, we found no association between ‘utilitarian’ judgments in sacrificial dilemmas and characteristic utilitarian judgments relating to assistance to distant people in need, self-sacrifice and impartiality, even when the utilitarian justification for these judgments was made explicit and unequivocal. This lack of association remained even when we controlled for the antisocial element in ‘utilitarian’ judgment. Taken together, these results suggest that there is very little relation between sacrificial judgments in the hypothetical dilemmas that dominate current research, and a genuine utilitarian approach to ethics.
The entire article is here.
Wednesday, December 3, 2014
Psychologists to Review Role in Detainee Interrogations
By James Risen
The New York Times
November 13, 2014
Here is an excerpt:
For years, questions about the role of American psychologists and behavioral scientists in the development and implementation of the Bush-era interrogation program have been raised by human rights advocates as well as by critics within the psychological profession itself. Psychologists were involved in developing the enhanced interrogation techniques used on terrorism suspects by the Central Intelligence Agency. Later, a number of psychologists, in the military and in the intelligence community, were involved in carrying out and monitoring interrogations.
The entire article is here.
Moral Psychology as Accountability
By Brendan Dill and Stephen Darwall
In Justin D’Arms & Daniel Jacobson (eds.), Moral Psychology and Human Agency: Philosophical Essays on the Science of Ethics (pp. 40-83). Oxford University Press. Pre-publication draft. For citation or quotation, please refer to the published volume.
Introduction
When moral psychology exploded a decade ago with groundbreaking research, there was considerable excitement about the potential fruits of collaboration between moral philosophers and moral psychologists. However, this enthusiasm soon gave way to controversy about whether either field was, or even could be, relevant to the other (e.g., Greene 2007; Berker 2009). After all, it seems at first glance that the primary question researched by moral psychologists—how people form judgments about what is morally right and wrong—is independent from the parallel question investigated by moral philosophers—what is in fact morally right and wrong, and why.
Once we transcend the narrow bounds of quandary ethics and “trolleyology,” however, a broader look at the fields of moral psychology and moral philosophy reveals several common interests. Moral philosophers strive not only to determine what actions are morally right and wrong, but also to understand our moral concepts, practices, and psychology. They ask what it means to be morally right, wrong, or obligatory: what distinguishes moral principles from other norms of action, such as those of instrumental rationality, prudence, excellence, or etiquette (Anscombe 1958; Williams 1985; Gibbard 1990; Annas 1995)? Moral psychologists pursue this very question in research on the distinction between moral and conventional rules (Turiel 1983; Nichols 2002; Kelly et al. 2007; Royzman, Leeman, and Baron 2009) and in attempts to define the moral domain (e.g., Haidt and Kesebir 2010).
The entire paper is here.
Tuesday, December 2, 2014
Why the Right to Die Movement Needed Brittany Maynard
By Keisha Ray
Bioethics.net
Originally published November 12, 2014
Here is an excerpt:
Choice
In life many choices are not our own, but how we live our life is our choice. Maynard did not choose to have cancer invade her brain, but she did choose how to live her life after her diagnosis. After her diagnosis, Maynard continued doing the activities that had always made her life fulfilling—traveling, volunteering, and spending time with family and friends. Maynard made an informed choice to not let brain cancer kill her. She made the decision to choose how her life ends. And that’s one of the major aims of the right to die movement—that terminally ill patients ought to be able to choose how long they live with their disease and whether their disease will be the cause of their death. Disease takes away so many choices and puts people at the mercy of doctors, nurses, and, most importantly, at the mercy of their failing body. The right to die movement aims to take some of that power back.
The entire article is here.
Attributions to God and Satan About Life-Altering Events.
Ray, Shanna D.; Lockman, Jennifer D.; Jones, Emily J.; Kelly, Melanie H.
Psychology of Religion and Spirituality, Sep 22, 2014, No Pagination Specified. http://dx.doi.org/10.1037/a0037884
Abstract
When faced with negative life events, people often interpret the events by attributing them to the actions of God or Satan (Lupfer, Tolliver, & Jackson, 1996; Ritzema, 1979). To explore these attributions, we conducted a mixed-method study of Christians who were college freshmen. Participants read vignettes depicting a negative life event that had a beginning and an end that was systematically varied. Participants assigned a larger role to God in vignettes where an initially negative event (e.g., relationship breakup) led to a positive long-term outcome (e.g., meeting someone better) than with a negative (e.g., depression and loneliness) or unspecified long-term outcome. Participants attributed a lesser role to Satan when there was a positive outcome rather than a negative or unspecified outcome. Participants also provided their own narratives, recounting personal experiences that they attributed to the actions of God or Satan. Participant-supplied narratives often demonstrated “theories” about the actions of God, depicting God as being involved in negative events as a rescuer, comforter, or one who brings positive out of the negative. Satan-related narratives were often lacking in detail or a clear theory of how Satan worked. Participants who did provide this information depicted Satan as acting primarily through influencing one’s thoughts and/or using other people to encourage one’s negative behavior.
The entire article is here.
Monday, December 1, 2014
Legal Theory Lexicon: Justice
By Lawrence Solum
Legal Theory Blog
Originally published November 9, 2014
Introduction
The connection between law and justice is a deep one. We have "Halls of Justice," "Justices of the Supreme Court," and "the administration of justice." We know that "justice" is one of the central concepts of legal theory, but the concept of justice is also vague and ambiguous. This post provides an introductory roadmap to the idea of justice. Subsequent entries in the Legal Theory Lexicon will cover more particular aspects of this topic, such as "distributive justice." As always, this post is aimed at law students (especially first-year law students) with an interest in legal theory.
The entire blog post is here.
Blame as Harm
By Patrick Mayer
Academia.edu
I. Introduction
Among philosophers who work on the topic of moral responsibility there is widespread agreement with the claim that when we debate the nature and existence of moral responsibility we are not talking about punishment. To say that someone is morally responsible for a bad action is not to say that she ought to be punished for it, nor does saying that moral responsibility is a fiction imply that you think punishment is illegitimate. Moral responsibility is about praiseworthiness and blameworthiness. You are morally responsible for some action iff it is either appropriate to praise you, appropriate to blame you, or would have been so had the action been morally significant in one way or another.
In this paper ‘Incompatibilism’ will be the name of the view that moral responsibility is incompatible with determinism. So according to Incompatibilism, if determinism is true, it is never appropriate to praise or blame someone. Why? Different incompatibilists will give you different answers. One might answer by saying that it is a conceptual or linguistic fact that blameworthiness is incompatible with determinism. An example would be saying that the definition of ‘blameworthy’ or the concept of blameworthiness contains within it the claim that for an agent to be blameworthy for X it must have been possible for the agent to do something other than X. On this way of thinking about incompatibilism, if someone believes that determinism is true and also believes that someone is blameworthy, then they accept contradictory claims and are therefore irrational.
Another way to answer the question is to say not that believing someone blameworthy would be inconsistent with a belief in determinism, but that to blame someone would be unfair if determinism were true. This second answer I will call ‘Fairness Incompatibilism.’ There are advantages to adopting Fairness Incompatibilism. One, and probably the historically most important, is that by adopting Fairness Incompatibilism one can answer a criticism made by P.F. Strawson against incompatibilism. Strawson claims that the practice of reacting emotionally to people, a practice many have treated as equivalent to blaming and praising, stands in no need of an external metaphysical justification. This is meant to rule out the demand, made by incompatibilists, that morally responsible agents have a form of agency that implies indeterminism. But considerations of fairness are internal to the practice of reacting emotionally to people, and so if the case for incompatibilism is made by appeal to the concept of fairness, then whether or not Strawson’s claim about the immunity of our practice from purely metaphysical considerations holds, incompatibilism can still go through. Another motivation for accepting Fairness Incompatibilism is that many have the intuition that if determinism is true, then when we blame people we are doing something wrong to them, treating them in a way they do not deserve.
The entire article is here.