Robert Wright interviews Paul Bloom on his book "Against Empathy."
The Wright Show
Originally published December 6, 2016
Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Saturday, December 31, 2016
Friday, December 30, 2016
Programmers are having a huge discussion about the unethical and illegal things they’ve been asked to do
Julie Bort
Business Insider
Originally published November 20, 2016
Here is an excerpt:
He pointed out that "there are hints" that developers will increasingly face some real heat in the years to come. He cited Volkswagen America's CEO, Michael Horn, who, during a Congressional hearing, at first blamed software engineers for the company's emissions cheating scandal, claiming the coders had acted on their own "for whatever reason." Horn later resigned after US prosecutors accused the company of making this decision at the highest levels and then trying to cover it up.
But Martin pointed out, "The weird thing is, it was software developers who wrote that code. It was us. Some programmers wrote cheating code. Do you think they knew? I think they probably knew."
Martin finished with a fire-and-brimstone call to action in which he warned that one day, some software developer will do something that will cause a disaster that kills tens of thousands of people.
But Sourour points out that it's not just about accidentally killing people or deliberately polluting the air. Software has already been used by Wall Street firms to manipulate stock quotes.
The article is here.
The ethics of algorithms: Mapping the debate
Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, Luciano Floridi
Big Data and Society
DOI: 10.1177/2053951716679679, Dec 2016
Abstract
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
The article is here.
Thursday, December 29, 2016
The Tragedy of Biomedical Moral Enhancement
Stefan Schlag
Neuroethics (2016). pp 1-13.
doi:10.1007/s12152-016-9284-5
Abstract
In Unfit for the Future, Ingmar Persson and Julian Savulescu present a challenging argument in favour of biomedical moral enhancement. In light of the existential threats of climate change, insufficient moral capacities of the human species seem to require a cautiously shaped programme of biomedical moral enhancement. The story of the tragedy of the commons creates the impression that climate catastrophe is unavoidable and consequently gives strength to the argument. The present paper analyses to what extent a policy in favour of biomedical moral enhancement can thereby be justified and puts special emphasis on the political context. By reconstructing the theoretical assumptions of the argument and by taking them seriously, it is revealed that the argument is self-defeating. The tragedy of the commons may make moral enhancement appear necessary, but when it comes to its implementation, a second-order collective action-problem emerges and impedes the execution of the idea. The paper examines several modifications of the argument and shows how it can be based on easier enforceability of BME. While this implies enforcement, it is not an obstacle for the justification of BME. Rather, enforceability might be the decisive advantage of BME over other means. To take account of the global character of climate change, the paper closes with an inquiry of possible justifications of enforced BME on a global level. The upshot of the entire line of argumentation is that Unfit for the Future cannot justify BME because it ignores the nature of the problem of climate protection and political prerequisites of any solution.
The article is here.
The True Self: A psychological concept distinct from the self.
Strohminger N., Newman, G., and Knobe, J. (in press).
Perspectives on Psychological Science.
A long tradition of psychological research has explored the distinction between characteristics that are part of the self and those that lie outside of it. Recently, a surge of research has begun examining a further distinction. Even among characteristics that are internal to the self, people pick out a subset as belonging to the true self. These factors are judged as making people who they really are, deep down. In this paper, we introduce the concept of the true self and identify features that distinguish people’s understanding of the true self from their understanding of the self more generally. In particular, we consider recent findings that the true self is perceived as positive and moral, and that this tendency is actor-observer invariant and cross-culturally stable. We then explore possible explanations for these findings and discuss their implications for a variety of issues in psychology.
The paper is here.
Wednesday, December 28, 2016
Oxytocin modulates third-party sanctioning of selfish and generous behavior within and between groups
Katie Daughters, Antony S.R. Manstead, Femke S. Ten Velden, Carsten K.W. De Dreu
Psychoneuroendocrinology, Available online 3 December 2016
Abstract
Human groups function because members trust each other and reciprocate cooperative contributions, and reward others’ cooperation and punish their non-cooperation. Here we examined the possibility that such third-party punishment and reward of others’ trust and reciprocation is modulated by oxytocin, a neuropeptide generally involved in social bonding and in-group (but not out-group) serving behavior. Healthy males and females (N = 100) self-administered a placebo or 24 IU of oxytocin in a randomized, double-blind, between-subjects design. Participants were asked to indicate (incentivized, costly) their level of reward or punishment for in-group (outgroup) investors donating generously or fairly to in-group (outgroup) trustees, who back-transferred generously, fairly or selfishly. Punishment (reward) was higher for selfish (generous) investments and back-transfers when (i) investors were in-group rather than outgroup, and (ii) trustees were in-group rather than outgroup, especially when (iii) participants received oxytocin rather than placebo. It follows, first, that oxytocin leads individuals to ignore out-groups as long as out-group behavior is not relevant to the in-group and, second, that oxytocin contributes to creating and enforcing in-group norms of cooperation and trust.
The article is here.
Inference of trustworthiness from intuitive moral judgments
Everett JA., Pizarro DA., Crockett MJ.
Journal of Experimental Psychology: General, Vol 145(6), Jun 2016, 772-787.
Moral judgments play a critical role in motivating and enforcing human cooperation, and research on the proximate mechanisms of moral judgments highlights the importance of intuitive, automatic processes in forming such judgments. Intuitive moral judgments often share characteristics with deontological theories in normative ethics, which argue that certain acts (such as killing) are absolutely wrong, regardless of their consequences. Why do moral intuitions typically follow deontological prescriptions, as opposed to those of other ethical theories? Here, we test a functional explanation for this phenomenon by investigating whether agents who express deontological moral judgments are more valued as social partners. Across 5 studies, we show that people who make characteristically deontological judgments are preferred as social partners, perceived as more moral and trustworthy, and are trusted more in economic games. These findings provide empirical support for a partner choice account of moral intuitions whereby typically deontological judgments confer an adaptive function by increasing a person's likelihood of being chosen as a cooperation partner. Therefore, deontological moral intuitions may represent an evolutionarily prescribed prior that was selected for through partner choice mechanisms.
The article is here.
Tuesday, December 27, 2016
Is Addiction a Brain Disease?
Kent C. Berridge
Neuroethics (2016). pp 1-5.
doi:10.1007/s12152-016-9286-3
Abstract
Where does normal brain or psychological function end, and pathology begin? The line can be hard to discern, making disease sometimes a tricky word. In addiction, normal ‘wanting’ processes become distorted and excessive, according to the incentive-sensitization theory. Excessive ‘wanting’ results from drug-induced neural sensitization changes in underlying brain mesolimbic systems of incentive. ‘Brain disease’ was never used by the theory, but neural sensitization changes are arguably extreme enough and problematic enough to be called pathological. This implies that ‘brain disease’ can be a legitimate description of addiction, though caveats are needed to acknowledge roles for choice and active agency by the addict. Finally, arguments over ‘brain disease’ should be put behind us. Our real challenge is to understand addiction and devise better ways to help. Arguments over descriptive words only distract from that challenge.
The article is here.
Artificial moral agents: creative, autonomous and social. An approach based on evolutionary computation
Ioan Muntean and Don Howard
Frontiers in Artificial Intelligence and Applications
Volume 273: Sociable Robots and the Future of Social Relations
Abstract
In this paper we propose a model of artificial normative agency that accommodates some social competencies that we expect from artificial moral agents. The artificial moral agent (AMA) discussed here is based on two components: (i) a version of virtue ethics of human agents (VE) adapted to artificial agents, called here “virtual virtue ethics” (VVE); and (ii) an implementation based on evolutionary computation (EC), more concretely genetic algorithms. The reasons to choose VVE and EC are related to two elements that are, we argue, central to any approach to artificial morality: autonomy and creativity. The greater the autonomy an artificial agent has, the more it needs moral standards. In the virtue ethics, each agent builds her own character in time; creativity comes in degrees as the individual becomes morally competent. The model of an autonomous and creative AMA thus implemented is called GAMA= Genetic(-inspired) Autonomous Moral Agent. First, unlike the majority of other implementations of machine ethics, our model is more agent-centered, than action-centered; it emphasizes the developmental and behavioral aspects of the ethical agent. Second, in our model, the AMA does not make decisions exclusively and directly by following rules or by calculating the best outcome of an action. The model incorporates rules as initial data (as the initial population of the genetic algorithms) or as correction factors, but not as the main structure of the algorithm. Third, our computational model is less conventional, or at least it does not fall within the Turing tradition in computation. Genetic algorithms are excellent searching tools that avoid local minima and generate solutions based on previous results. In the GAMA model, only prospective at this stage, the VVE approach to ethics is better implemented by EC. Finally, the GAMA agents can display sociability through competition among the best moral actions and the desire to win the competition. 
Both VVE and EC are more adequate for a “social approach” to AMAs when compared to the standard approaches, and the GAMA is the more promising candidate for a “moral and social artificial agent”.
The article is here.
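The abstract above leans on genetic algorithms as its computational engine: populations of candidates, selection by fitness, crossover, and mutation. As an illustrative aside only (not the authors' implementation — the paper describes GAMA as prospective), a minimal genetic-algorithm loop in Python might look like the sketch below; the bit-string genomes, tournament selection, and the toy "one-max" fitness function are all placeholder assumptions, standing in for whatever scoring of moral actions such a model would actually use.

```python
import random

def evolve(fitness, pop_size=50, genome_len=10, generations=40,
           mutation_rate=0.1, seed=0):
    """Minimal genetic algorithm: tournament selection of size 2,
    one-point crossover, and per-bit mutation on 0/1 genomes."""
    rng = random.Random(seed)
    # Initial population of random bit-string genomes.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: the fitter of two random individuals.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation, applied independently per gene.
            child = [g ^ 1 if rng.random() < mutation_rate else g
                     for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits ("one-max"), a stand-in for any
# numeric score attached to candidate actions.
best = evolve(fitness=sum)
```

As the abstract notes, such a search avoids local minima by keeping a diverse population rather than refining a single candidate, and it naturally supports rules as "initial data": seeding the starting population with rule-conforming genomes rather than hard-coding the rules into the loop.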
Monday, December 26, 2016
Changing Memories: Between Ethics and Speculation
Eric Racine and William Affleck
AMA Journal of Ethics. December 2016, Volume 18, Number 12: 1241-1248.
doi: 10.1001/journalofethics.2016.18.12.sect1-1612.
Abstract
Over the past decade, a debate has emerged between those who believe that memory-modulating technologies are inherently dangerous and need to be regulated and those who believe these technologies present minimal risk and thus view concerns about their use as far-fetched and alarmist. This article tackles three questions central to this debate: (1) Do these technologies jeopardize personhood? (2) Are the risks of these technologies acceptable? (3) Do these technologies require special regulation or oversight? Although concerns about the unethical use of memory-modulating technologies are legitimate, these concerns should not override the responsible use of memory-modulating technologies in clinical contexts. Accordingly, we call for careful comparative analysis of their use on a case-by-case basis.
The article is here.
Reframing Research Ethics: Towards a Professional Ethics for the Social Sciences
Nathan Emmerich
Sociological Research Online, 21 (4), 7
DOI: 10.5153/sro.4127
Abstract
This article is premised on the idea that were we able to articulate a positive vision of the social scientist's professional ethics, this would enable us to reframe social science research ethics as something internal to the profession. As such, rather than suffering under the imperialism of a research ethics constructed for the purposes of governing biomedical research, social scientists might argue for ethical self-regulation with greater force. I seek to provide the requisite basis for such an 'ethics' by, first, suggesting that the conditions which gave rise to biomedical research ethics are not replicated within the social sciences. Second, I argue that social science research can be considered as the moral equivalent of the 'true professions.' Not only does it have an ultimate end, but it is one that is – or, at least, should be – shared by the state and society as a whole. I then present a reading of confidentiality as a methodological – and not simply ethical – aspect of research, one that offers further support for the view that social scientists should attend to their professional ethics and the internal standards of their disciplines, rather than the contemporary discourse of research ethics that is rooted in the bioethical literature. Finally, and by way of a conclusion, I consider the consequences of the idea that social scientists should adopt a professional ethics and propose that the Clinical Ethics Committee might provide an alternative model for the governance of social science research.
The article is here.
Sunday, December 25, 2016
Excerpt from Stanley Kubrick's Playboy Interview 1968
Playboy, 1968
Playboy: If life is so purposeless, do you feel it’s worth living?
Kubrick: Yes, for those who manage somehow to cope with our mortality. The very meaninglessness of life forces a man to create his own meaning. Children, of course, begin life with an untarnished sense of wonder, a capacity to experience total joy at something as simple as the greenness of a leaf; but as they grow older, the awareness of death and decay begins to impinge on their consciousness and subtly erode their joie de vivre (a keen enjoyment of living), their idealism - and their assumption of immortality.
As a child matures, he sees death and pain everywhere about him, and begins to lose faith in the ultimate goodness of man. But if he’s reasonably strong - and lucky - he can emerge from this twilight of the soul into a rebirth of life’s élan (enthusiastic and assured vigour and liveliness).
Both because of and in spite of his awareness of the meaninglessness of life, he can forge a fresh sense of purpose and affirmation. He may not recapture the same pure sense of wonder he was born with, but he can shape something far more enduring and sustaining.
The most terrifying fact about the universe is not that it is hostile but that it is indifferent; but if we can come to terms with this indifference and accept the challenges of life within the boundaries of death - however mutable man may be able to make them - our existence as a species can have genuine meaning and fulfilment. However vast the darkness, we must supply our own light.
The entire interview is here.
Saturday, December 24, 2016
The Adaptive Utility of Deontology: Deontological Moral Decision-Making Fosters Perceptions of Trust and Likeability
Sacco, D.F., Brown, M., Lustgraaf, C.J.N. et al.
Evolutionary Psychological Science (2016).
doi:10.1007/s40806-016-0080-6
Abstract
Although various motives underlie moral decision-making, recent research suggests that deontological moral decision-making may have evolved, in part, to communicate trustworthiness to conspecifics, thereby facilitating cooperative relations. Specifically, social actors whose decisions are guided by deontological (relative to utilitarian) moral reasoning are judged as more trustworthy, are preferred more as social partners, and are trusted more in economic games. The current study extends this research by using an alternative manipulation of moral decision-making as well as the inclusion of target facial identities to explore the potential role of participant and target sex in reactions to moral decisions. Participants viewed a series of male and female targets, half of whom were manipulated to either have responded to five moral dilemmas consistent with an underlying deontological motive or utilitarian motive; participants indicated their liking and trust toward each target. Consistent with previous research, participants liked and trusted targets whose decisions were consistent with deontological motives more than targets whose decisions were more consistent with utilitarian motives; this effect was stronger for perceptions of trust. Additionally, women reported greater dislike for targets whose decisions were consistent with utilitarianism than men. Results suggest that deontological moral reasoning evolved, in part, to facilitate positive relations among conspecifics and aid group living and that women may be particularly sensitive to the implications of the various motives underlying moral decision-making.
The research is here.
Editor's Note: This research may apply to psychotherapy, leadership style, and politics.
Friday, December 23, 2016
Hiding true emotions: micro-expressions in eyes retrospectively concealed by mouth movements
Miho Iwasaki & Yasuki Noguchi
Scientific Reports 6, Article number: 22049 (2016)
doi:10.1038/srep22049
Abstract
When we encounter someone we dislike, we may momentarily display a reflexive disgust expression, only to follow up with a forced smile and greeting. Our daily lives are replete with a mixture of true and fake expressions. Nevertheless, are these fake expressions really effective at hiding our true emotions? Here we show that brief emotional changes in the eyes (micro-expressions, thought to reflect true emotions) can be successfully concealed by follow-up mouth movements (e.g. a smile). In the same manner as backward masking, mouth movements of a face inhibited conscious detection of all types of micro-expressions in that face, even when viewers paid full attention to the eye region. This masking works only in a backward direction, however, because no disrupting effect was observed when the mouth change preceded the eye change. These results provide scientific evidence for everyday behaviours like smiling to dissemble, and further clarify a major reason for the difficulty we face in discriminating genuine from fake emotional expressions.
The article is here.
Editor's note: This research may apply to transference and countertransference reactions in psychotherapy.
When A.I. Matures, It May Call Jürgen Schmidhuber ‘Dad’
John Markoff
The New York Times
Originally posted November 27, 2016
Here is an excerpt:
Dr. Schmidhuber also has a grand vision for A.I. — that self-aware or “conscious machines” are just around the corner — that causes eyes to roll among some of his peers. To put a fine point on the debate: Is artificial intelligence an engineering discipline, or a godlike field on the cusp of creating a new superintelligent species?
Dr. Schmidhuber is firmly in the god camp. He maintains that the basic concepts for such technologies already exist, and that there is nothing magical about human consciousness. “Generally speaking, consciousness and self-awareness are overrated,” he said, arguing that machine consciousness will emerge from more powerful computers and software algorithms much like those he has already designed.
It’s been an obsession since he was a teenager in Germany reading science fiction.
“As I grew up I kept asking myself, ‘What’s the maximum impact I could have?’” Dr. Schmidhuber recalled. “And it became clear to me that it’s to build something smarter than myself, which will build something even smarter, et cetera, et cetera, and eventually colonize and transform the universe, and make it intelligent.”
The article is here.
Thursday, December 22, 2016
Hard Time or Hospital Treatment? Mental Illness and the Criminal Justice System
Christine Montross
The New England Journal of Medicine
2016; 375:1407-1409
Here is an excerpt:
When law enforcement is involved, the trajectory of my patients’ lives veers sharply. The consequences are unpredictable and range from stability and safety to unmitigated disaster. When patients are ill or afraid enough to be potentially assaultive, the earliest decision as to whether they belong in jail or in the hospital may shape the course of the next many years of their lives.
It’s now well understood that the closing of state hospitals in the 1970s and 1980s led to the containment of mentally ill people in correctional facilities. Today our jails and state prisons contain an estimated 356,000 inmates with serious mental illness, while only about 35,000 people with serious mental illness are being treated in state hospitals — stark evidence of the decimation of the public mental health system.
When a mentally ill person comes into contact with the criminal justice system, the decision about whether that person belongs in jail or in the hospital is rarely a clinical one. Instead, it’s made by the gatekeepers of the legal system: police officers, prosecutors, and judges. The poor, members of minority groups, and people with a history of law-enforcement involvement are shuttled into the correctional system in disproportionate numbers; they are more likely to be arrested and less likely than their more privileged counterparts to be adequately treated for their psychiatric illnesses.
The article is here.
Lawsuit Aims to Hold 2 Contractors Accountable for C.I.A. Torture
By Sheri Fink and James Risen
The New York Times
Originally posted on November 27, 2016
Here is an excerpt:
Dr. Mitchell was first publicly identified as one of the architects of the C.I.A.’s “enhanced interrogation” program nearly a decade ago, and has given some news media interviews, but is now providing a more detailed account of his involvement. His book, “Enhanced Interrogation: Inside the Minds and Motives of the Islamic Terrorists Trying to Destroy America” (Crown Forum), was written with Bill Harlow, a former C.I.A. spokesman. It was reviewed by the agency before release. (The New York Times obtained a copy of the book before its publication date.)
In the book, Dr. Mitchell alleges that harsh interrogation techniques he devised and carried out, based on those he used as an Air Force trainer in survival schools to prepare airmen if they became prisoners of war, protected the detainees from even worse abuse by the C.I.A.
Dr. Mitchell wrote that he and Dr. Jessen sequestered prisoners in closed boxes, forced them to hold painful positions for hours and prevented them from sleeping for days. He also takes credit for suggesting and implementing waterboarding — covering a detainee’s face with a cloth and pouring water over it to simulate the sensation of drowning — among other now-banned techniques. “Although they were unpleasant, their use protected detainees from being subjected to unproven and perhaps harsher techniques made up on the fly that could have been much worse,” he wrote. C.I.A. officers, he added, “had already decided to get rough.”
The article is here.
Editor's note: If you think torture works, please read: Why Torture Doesn’t Work: The Neuroscience of Interrogation, by Shane O'Mara.
Wednesday, December 21, 2016
Empathy, Schmempathy
By Tom Bartlett
The Chronicle of Higher Education
Originally posted November 27, 2016
No one argues in favor of empathy. That’s because no one needs to: Empathy is an unalloyed good, like sunshine or cake or free valet parking. Instead we bemoan lack of empathy and nod our heads at the notion that, if only we could feel the pain of our fellow man, then everything would be OK and humanity could, at long last, join hands together in song.
Bah, says Paul Bloom. In his new book, Against Empathy: The Case for Rational Compassion (Ecco), Bloom argues that when it comes to helping one another, our emotions too often spoil everything. Instead of leading us to make smart decisions about how best to use our limited resources altruistically, they cause us to focus on what makes us feel good in the moment. We worry about the boy stuck in the well rather than the thousands of boys dying of malnutrition every day.
Bloom, a professor of psychology at Yale University, calls on us to feel less and think more.
The interview is here.
The Case Against Reality
Amanda Gefter
The Atlantic
Originally published April 25, 2016
Here is an excerpt:
Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.
Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”
On the other side are quantum physicists, marveling at the strange fact that quantum systems don’t seem to be definite objects localized in space until we come along to observe them. Experiment after experiment has shown—defying common sense—that if we assume that the particles that make up ordinary objects have an objective, observer-independent existence, we get the wrong answers. The central lesson of quantum physics is clear: There are no public objects sitting out there in some preexisting space. As the physicist John Wheeler put it, “Useful as it is under ordinary circumstances to say that the world exists ‘out there’ independent of us, that view can no longer be upheld.”
The article is here.
Tuesday, December 20, 2016
Glitches: A Conversation With Laurie R. Santos
Edge.org
Originally posted November 27, 2016
Here is an excerpt of the article/video:
Scholars like Kahneman, Thaler, and folks who think about the glitches of the human mind have been interested in the kind of animal work that we do, in part because the animal work has this important window into where these glitches come from. We find that capuchin monkeys have the same glitches we've seen in humans. We've seen the standard classic economic biases that Kahneman and Tversky found in humans in capuchin monkeys, things like loss aversion and reference dependence. They have those biases in spades.
That tells us something about how those biases work. That tells us those are old biases. They're not built for current economic markets. They're not built for systems dealing with money. There's something fundamental about the way we make sense of choices in the world, and if you're going to attack them and try to override them, you have to do it in a way that's honest about the fact that those biases are going to be way too deep.
If you are a Bob Cialdini and you're interested in the extent to which we get messed up by the information we hear that other people are doing, and you learn that it's just us—chimpanzees don't fall prey to that—you learn something interesting about how those biases work. This is something that we have under the hood that's operating off mechanisms that are not old, which we might be able to harness in a very different way than we would have for solving something like loss aversion.
What I've found is that when the Kahnemans and the Cialdinis of the world hear about the animal work, both in cases where animals are similar to humans and in cases where animals are different, they get pretty excited. They get excited because it's telling them something, not because they care about capuchins or dogs. They get excited because they care about humans, and the animal work has allowed us to get some insight into how humans tick, particularly when it comes to their biases.
The text/video is here.
The Role of Emotional Intuitions in Moral Judgments and Decisions
Gee, Catherine. 2014.
Journal of Cognition and Neuroethics 2 (1): 161–171.
Abstract
Joshua D. Greene asserts in his 2007 article “The Secret Joke of Kant’s Soul” that consequentialism is the superior moral theory compared to deontology due to its judgments arising from “cognitive” processes alone without (or very little) input from emotive processes. However, I disagree with Greene’s position and instead argue it is the combination of rational and emotive cognitive processes that is the key to forming a moral judgment. Studies on patients who suffered damage to their ventromedial prefrontal cortex will be discussed as they are real-life examples of individuals who, due to brain damage, make moral judgments based predominantly on “cognitive” processes. These examples will demonstrate that the results of isolated “cognitive” mental processing are hardly what Greene envisioned. Instead of superior processing and judgments, these individuals show significant impairment. As such, Greene’s account ought to be dismissed, for it does not stand up to philosophical scrutiny or the psychological literature on this topic.
The article is here.
Monday, December 19, 2016
Normality: Part descriptive, part prescriptive
Adam Bear and Joshua Knobe
Cognition
Published October 24, 2016
Abstract
People’s beliefs about normality play an important role in many aspects of cognition and life (e.g., causal cognition, linguistic semantics, cooperative behavior). But how do people determine what sorts of things are normal in the first place? Past research has studied both people’s representations of statistical norms (e.g., the average) and their representations of prescriptive norms (e.g., the ideal). Four studies suggest that people’s notion of normality incorporates both of these types of norms. In particular, people’s representations of what is normal were found to be influenced both by what they believed to be descriptively average and by what they believed to be prescriptively ideal. This is shown across three domains: people’s use of the word ‘‘normal” (Study 1), their use of gradable adjectives (Study 2), and their judgments of concept prototypicality (Study 3). A final study investigated the learning of normality for a novel category, showing that people actively combine statistical and prescriptive information they have learned into an undifferentiated notion of what is normal (Study 4). Taken together, these findings may help to explain how moral norms impact the acquisition of normality and, conversely, how normality impacts the acquisition of moral norms.
The article is here.
Colorado Voters Approve Aid-In-Dying Measure
John Daley
National Public Radio
Originally published November 10, 2016
Colorado has joined the handful of states that allow terminally ill patients to end their lives with medicine prescribed by a doctor.
Voters passed Proposition 106 by a 65 percent to 35 percent margin.
The fight pitted those who think the terminally ill should have the choice to end their lives if they choose to do so against those who think it's morally wrong and that people might be pressured into ending their lives.
(cut)
Under the Colorado measure, two doctors must agree a terminally ill adult has six months or less to live and is mentally competent. The person would self-administer the drug.
The article is here.
Sunday, December 18, 2016
There may be no worse place for mentally ill people to receive treatment than prison
By The Spotlight Team
The Boston Globe
Originally posted November 25, 2016
Here is an excerpt:
Last year, more than 15,000 prisoners walked out of Massachusetts jails and prisons. More than one-third suffer from mental illness; more than half have a history of addiction. Thousands are coping with both kinds of disorders, their risk of problems amplified as they reenter society.
Within three years of being released, 37 percent of inmates who leave state prisons with mental illnesses are locked up again, compared with 30 percent of those who do not have mental health problems, according to a Department of Correction analysis of 2012 releases. Inmates battling addiction fare worse: About half are convicted of a new crime within three years, according to one state study. And inmates with a “dual diagnosis” of addiction and mental illness, like Nick Lynch, do the worst of all, national studies show.
Despite the vast need — and the potential payoff in reduced recidivism — mental health and substance abuse treatment for many Massachusetts inmates is chronically undermined by clinician shortages, shrinking access to medication, and the widespread use of segregation as discipline. The prison environment itself is a major obstacle to treatment: In a culture ruled by aggression and fear, the trust and openness required for therapy are exponentially harder to achieve.
And when their incarcerations end, many mentally ill and drug-addicted prisoners are sent back into the world without basic tools they need to succeed, such as ready access to medication, addiction counseling, or adequate support and oversight. Such omissions can be critical: The Harvard-led Boston Reentry Study found in 2014 that inmates with a mix of mental illness and addiction are significantly less likely than others to find stable housing, work income, and family support in the critical initial period after leaving prison, leaving them insecure, isolated, and at risk of falling into “diminished mental health, drug use and relapse.”
The article is here.
Saturday, December 17, 2016
Free Will and Autonomous Medical Decision-Making
Matthew A. Butkus
Journal of Cognition and Neuroethics 3 (1): 75–119.
Abstract
Modern medical ethics makes a series of assumptions about how patients and their care providers make decisions about forgoing treatment. These assumptions are based on a model of thought and cognition that does not reflect actual cognition—it has substituted an ideal moral agent for a practical one. Instead of a purely rational moral agent, current psychology and neuroscience have shown that decision-making reflects a number of different factors that must be considered when conceptualizing autonomy. Multiple classical and contemporary discussions of autonomy and decision-making are considered and synthesized into a model of cognitive autonomy. Four categories of autonomy criteria are proposed to reflect current research in cognitive psychology and common clinical issues.
The article is here.
Friday, December 16, 2016
Why moral companies do immoral things
Michael Skapinker
Financial Times
Originally published November 23, 2016
Here is an excerpt:
But I wondered about the “better than average” research cited above. Could the illusion of moral superiority apply to organisations as well as individuals? And could companies believe they were so superior morally that the occasional lapse into immorality did not matter much? The Royal Holloway researchers said they had recently conducted experiments examining just these issues and were preparing to publish the results. They had found that political groups with a sense of moral superiority felt justified in behaving aggressively towards opponents. In experiments, this meant denying them a monetary benefit.
“It isn’t difficult to imagine a similar scenario arising in a competitive organisational context. To the extent that employees may perceive their organisation to be morally superior to other organisations, they might feel licensed to ‘cut corners’ or behave somewhat unethically — for example, to give their organisation a competitive edge.
“These behaviours may be perceived as justified … or even ethical, insofar as they promote the goals of their morally superior organisation,” they told me.
The article is here.
How a doctor convicted in drugs-for-sex case returned to practice
Danny Robbins
Atlanta Journal-Constitution
Part of a series on Physical and Sexual Abuse
Here is an excerpt:
“The pimp with a prescription pad” is what one prosecutor called him during a trial in which it was revealed that more than 400 sexually explicit photos of female patients and other women had been discovered in his office.
In some states, where legislatures have enacted laws prohibiting doctors who commit certain crimes from practicing, Dekle’s career would be over. But in Georgia, where the law gives the medical board the discretion to license anyone it sees fit, he was back in practice two years after leaving prison.
More than a dozen years later, that decision still leads some to wonder what the board was thinking.
“It’s particularly damning that he was using his ability to write prescriptions to further his sexual activities,” said Chris Dorsey, the Georgia Bureau of Investigation agent who led the probe that sent Dekle to prison. “A doctor burglarizes a house and then pays his debt to society, could he be a good doctor? I could argue it both ways. But when you have someone who abused everything centering on a medical practice to victimize all these people, that’s really a separate issue.”
The article is here.
Thursday, December 15, 2016
Informed Consent in Organ Donation and Abandonment of the Dead-Donor Rule
Matthew Phillip Mead
Journal of Cognition and Neuroethics 3 (2): 47–56.
Abstract
There has been considerable discussion regarding the ethics of organ transplantation and the dead-donor rule (DDR). Much of the medical and philosophical literature reveals inherent difficulties in definitions of death and the appropriate time to begin organ procurement. In this essay, an argument is presented for abandoning the DDR and switching to a practice in which donors are informed of the conditions under which their organs will be removed, rather than the current practice of requiring a declaration of death. Informed organ donation consent (IODC) would allow for greater transparency in the organ procurement process and alleviate many of the ethical concerns raised in the literature today surrounding these practices. This has the potential to improve public trust of organ procurement and increase the numbers of donors.
The article is here.
How Well Does Your State Protect Patients?
By Carrie Teegardin
Atlanta Journal-Constitution
A series on Physicians and Abuse
Here is an excerpt:
In most states, doctors dominate medical licensing boards and have the authority to decide who is fit to practice medicine and who isn’t. Usually the laws do not restrict a board’s authority by mandating certain punishments for some types of violations. Many licensing boards — including Georgia’s — say that’s how it should be.
“Having a bold, bright line saying a felony equals this or that is not good policy,” said Bob Jeffery, executive director of the Georgia Composite Medical Board.
Jeffery said criminal courts punish offenders and civil courts can compensate victims. Medical regulators, he said, have a different role.
“A licensing board is charged with making sure a (doctor) is safe to practice and that patients are protected,” he said.
With no legal prohibition standing in the way in most states, doctor-dominated medical boards often decide that doctors busted for abusive or illegal behaviors can be rehabilitated and safely returned to exam rooms.
New Jersey licensed a doctor convicted of sexual offenses with four patients. Kansas licensed a doctor imprisoned in Ohio for a sexual offense involving a child; that doctor later lost his Kansas license after making anonymous obscene phone calls to patients. Utah licensed a doctor who didn’t contest misdemeanor charges of sexual battery for intentionally touching the genitals of patients, staff members and others.
The article is here.
Wednesday, December 14, 2016
If Animals Have Rights, Should Robots?
Nathan Heller
The New Yorker
Originally published November 28, 2016
Here is an excerpt:
This simple fact is responsible for centuries of ethical dispute. One Harambe activist might believe that killing a gorilla as a safeguard against losing human life is unjust due to our cognitive similarity: the way gorillas think is a lot like the way we think, so they merit a similar moral standing. Another might believe that gorillas get their standing from a cognitive dissimilarity: because of our advanced powers of reason, we are called to rise above the cat-eat-mouse game, to be special protectors of animals, from chickens to chimpanzees. (Both views also support untroubled omnivorism: we kill animals because we are but animals, or because our exceptionalism means that human interests win.) These beliefs, obviously opposed, mark our uncertainty about whether we’re rightful peers or masters among other entities with brains. “One does not meet oneself until one catches the reflection from an eye other than human,” the anthropologist and naturalist Loren Eiseley wrote. In confronting similarity and difference, we are forced to set the limits of our species’ moral reach.
Today, however, reckonings of that sort may come with a twist. In an automated world, the gaze that meets our own might not be organic at all. There’s a growing chance that it will belong to a robot: a new and ever more pervasive kind of independent mind. Traditionally, the serial abuse of Siri or violence toward driverless cars hasn’t stirred up Harambe-like alarm. But, if like-mindedness or mastery is our moral standard, why should artificial life with advanced brains and human guardianships be exempt? Until we can pinpoint animals’ claims on us, we won’t be clear about what we owe robots—or what they owe us.
Tuesday, December 13, 2016
Consciousness: The Underlying Problem
Conscious Entities
November 24, 2016
What is the problem about consciousness? A Royal Institution video with interesting presentations (part 2 another time).
Anil Seth presents a striking illusion and gives an optimistic view of the ability of science to tackle the problem; or maybe we just get on with the science anyway? The philosophers may ask good questions, but their answers have always been wrong.
Barry Smith says that’s because when the philosophers have sorted a subject out it moves over into science. One problem is that we tend to miss thinking about consciousness and think about its contents. Isn’t there a problem: to be aware of your own awareness changes it? I feel pain in my body, but could consciousness be in my ankle?
Chris Frith points out that actually only a small part of our mental activity has anything to do with consciousness, and in fact there is evidence to show that many of the things we think are controlled by conscious thought really are not: a vindication of Helmholtz’s idea of unconscious inference. Thinking about your thinking messes things up?
The video is here.
Of Tooth and Claw: Predator Self-Identifications Mediate Gender Differences in Interpersonal Arrogance
Robinson, M. D., Bair, J. L., Liu, T., et al. (2016).
Sex Roles, pp. 1–15.
doi:10.1007/s11199-016-0706-y
Abstract
Men often score higher than women do on traits or tendencies marked by hostile dominance. The purpose of the present research was to contribute to an understanding of these gender differences. Four studies (total N = 494 U.S. undergraduates) administered a modified animal preference test in which participants could choose to be predator or prey animals, but not labeled as such. Men were consistently more interested in being predator animals than women were, displaying a sort of hostile dominance in their projective preferences. Predator self-identifications, in turn, mediated gender differences in outcomes related to hostile dominance. Studies 1 and 2 provided initial evidence for this model in the context of variations in interpersonal arrogance, and Studies 3 and 4 extended the model to nonverbal displays and daily life prosociality, respectively. The findings indicate that gender differences in hostile dominance are paralleled by gender differences in preferring to think about the self in predator-like terms. Accordingly, the findings provide new insights into aggressive forms of masculine behavior.
Monday, December 12, 2016
Preventing Conflicts of Interest of NFL Team Physicians
Mark A. Rothstein
The Hastings Center Report
Originally posted November 21, 2016
Abstract
At least since the time of Hippocrates, the physician-patient relationship has been the paradigmatic ethical arrangement for the provision of medical care. Yet, a physician-patient relationship does not exist in every professional interaction involving physicians and individuals they examine or treat. There are several “third-party” relationships, mostly arising where the individual is not a patient and is merely being examined rather than treated, the individual does not select or pay the physician, and the physician's services are provided for the benefit of another party. Physicians who treat NFL players have a physician-patient relationship, but physicians who merely examine players to determine their health status have a third-party relationship. As described by Glenn Cohen et al., the problem is that typical NFL team doctors perform both functions, which leads to entrenched conflicts of interest. Although there are often disputes about treatment, the main point of contention between players and team physicians is the evaluation of injuries and the reporting of players’ health status to coaches and other team personnel. Cohen et al. present several thoughtful recommendations that deserve serious consideration. Rather than focusing on their specific recommendations, however, I would like to explain the rationale for two essential reform principles: the need to sever the responsibilities of treatment and evaluation by team physicians and the need to limit the amount of player medical information disclosed to teams.
Cryonics: hype, hope or hell?
Neera Bhatia & Julian Savulescu
The Conversation
Originally posted November 22, 2016
Here is an excerpt:
Is cryonics ethical?
There are some arguments in favour of cryonics, the simplest of which is one of free will and choice. As long as people are informed of the very small chance of success of future re-animation, and they are not being coerced, then their choice is an expression of their autonomy about how they wish to direct the disposal of their bodies and resources after death.
In this light, choosing cryonics can be seen as no different to choosing cremation or burial, albeit a much more expensive option.
However, this case raises several other ethical and problematic concerns. There is the issue of potentially exploiting vulnerable people. Some might argue vulnerable people are trading hype for hope.
But if we were to replace the science of cryonics with the promises of religious or spiritual healers made at the bedside of the dying – of earlier access to “eternal life” in return for large payments known as indulgences – would this be so different?
Serious regulatory problems ahead
Legal and ethical issues aside, there are other serious issues to consider.
How can dying people have confidence in the ability of a company to keep their remains intact? If the cryonic company were to cease operating because of financial difficulties, what would happen to the frozen body?
The article is here.
Sunday, December 11, 2016
Do Unto Others? Methodological Advance and Self- Versus Other-Attentive Resistance in Milgram’s “Obedience” Experiments
Matthew M. Hollander and Douglas W. Maynard
Social Psychology Quarterly August 2, 2016
Abstract
We introduce conversation analysis (CA) as a methodological innovation that contributes to studies of the classic Milgram experiment, one allowing for substantive advances in the social psychological “obedience to authority” paradigm. Data are 117 audio recordings of Milgram’s original experimental sessions. We discuss methodological features of CA and then show how CA allows for methodological advances in understanding the Milgramesque situation by treating it as a three-party interactional scene, explicating an interactional dilemma for the “Teacher” subjects, and decomposing categorical outcomes (obedience vs. defiance) into their concrete interactional routes. Substantively, we analyze two kinds of resistance to directives enacted by both obedient and defiant participants, who may orient to how continuation would be troublesome primarily for themselves (self-attentive resistance) or for the person receiving shocks (other-attentive resistance). Additionally, we find that defiant participants mobilize two other-attentive practices almost never used by obedient ones: Golden Rule accounts and “letting the Learner decide.”
The article is here.
Saturday, December 10, 2016
Decision-making on behalf of people living with dementia: how do surrogate decision-makers decide?
Deirdre Fetherstonhaugh, Linda McAuliffe, Michael Bauer, Chris Shanley
J Med Ethics
doi:10.1136/medethics-2015-103301
Abstract
Background
For people living with dementia, the capacity to make important decisions about themselves diminishes as their condition advances. As a result, important decisions (affecting lifestyle, medical treatment and end of life) become the responsibility of someone else, as the surrogate decision-maker. This study investigated how surrogate decision-makers make important decisions on behalf of a person living with dementia.
Methods
Semi-structured interviews were conducted with 34 family members who had formally or informally taken on the role of surrogate decision-maker. Thematic analysis of interviews was undertaken, which involved identifying, analysing and reporting themes arising from the data.
Results
Analysis revealed three main themes associated with the process of surrogate decision-making in dementia: knowing the person's wishes; consulting with others and striking a balance. Most participants reported that there was not an advance care plan in place for the person living with dementia. Even when the prior wishes of the person with dementia were known, the process of decision-making was often fraught with complexity.
Discussion
Surrogate decision-making on behalf of a person living with dementia is often a difficult process. Advance care planning can play an important role in supporting this process. Healthcare professionals can recognise the challenges that surrogate decision-makers face and support them through advance care planning in a way that suits their needs and circumstances.
The article is here.
Friday, December 9, 2016
The Case Against Reality
Amanda Gefter
The Atlantic
Originally posted April 22, 2016
Here is an excerpt:
The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.
Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.
Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”
The article is here.
Moral neuroenhancement
Earp, B. D., Douglas, T., & Savulescu, J. (forthcoming). Moral neuroenhancement. In S. Johnson & K. Rommelfanger (eds.), Routledge Handbook of Neuroethics. New York: Routledge.
Abstract
In this chapter, we introduce the notion of moral neuroenhancement, offering a novel definition as well as spelling out three conditions under which we expect that such neuroenhancement would be most likely to be permissible (or even desirable). Furthermore, we draw a distinction between first-order moral capacities, which we suggest are less promising targets for neurointervention, and second-order moral capacities, which we suggest are more promising. We conclude by discussing concerns that moral neuroenhancement might restrict freedom or otherwise misfire, and argue that these concerns are not as damning as they may seem at first.
The book chapter is here.
Thursday, December 8, 2016
Crowdfunding for Medical Care: Ethical Issues in an Emerging Health Care Funding Practice
Jeremy Snyder
The Hastings Center Report
November 22, 2016
Abstract
Crowdfunding websites allow users to post a public appeal for funding for a range of activities, including adoption, travel, research, participation in sports, and many others. One common form of crowdfunding is for expenses related to medical care. Medical crowdfunding appeals serve as a means of addressing gaps in medical and employment insurance, both in countries without universal health insurance, like the United States, and countries with universal coverage limited to essential medical needs, like Canada. For example, as of 2012, the website Gofundme had been used to raise a total of 8.8 million dollars (U.S.) for seventy-six hundred campaigns, the majority of which were health related. This money can make an important difference in the lives of crowdfunding users, as the costs of unexpected or uninsured medical needs can be staggering. In this article, I offer an overview of the benefits of medical crowdfunding websites and the ethical concerns they raise. I argue that medical crowdfunding is a symptom and cause of, rather than a solution to, health system injustices and that policy-makers should work to address the injustices motivating the use of crowdfunding sites for essential medical services. Despite the sites’ ethical problems, individual users and donors need not refrain from using them, but they bear a political responsibility to address the inequities encouraged by these sites. I conclude by suggesting some responses to these concerns and future directions for research.
The article is here.
Morality in transportation
Jeffrey C. Peters
The Conversation by way of Salon
Originally posted November 19, 2016
A common fantasy for transportation enthusiasts and technology optimists is for self-driving cars and trucks to form the basis of a safe, streamlined, almost choreographed dance. In this dream, every vehicle — and cyclist and pedestrian — proceeds unimpeded on any route, as the rest of the traffic skillfully avoids collisions and even eliminates stop-and-go traffic. It’s a lot like the synchronized traffic chaos in “Rush Hour,” a short movie by Black Sheep Films.
Today, autonomous cars are becoming more common, but safety is still a question. More than 30,000 people die on U.S. roads every year — nearly 100 a day. That’s despite the best efforts of government regulators, car manufacturers and human drivers alike. Early statistics from autonomous driving suggest that widespread automation could drive the death toll down significantly.
There’s a key problem, though: Computers like rules — solid, hard-and-fast instructions to follow. How should we program them to handle difficult situations? The hypotheticals are countless: What if the car has to choose between hitting one cyclist or five pedestrians? What if the car must decide to crash into a wall and kill its occupant, or slam through a group of kindergartners? How do we decide? Who does the deciding?
The article is here.
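The article's point about computers needing "solid, hard-and-fast instructions" can be made concrete with a sketch. The code below is entirely hypothetical (the article names no algorithm): a function that scores candidate maneuvers by expected harm and picks the minimum. The harm weights are invented, which is precisely the article's question — someone has to decide them.

```python
# Hypothetical sketch: a rule-based "least harm" chooser for a self-driving car.
# The harm scores are the ethical crux -- a programmer or regulator must set them.

def choose_maneuver(options):
    """Pick the option with the lowest total expected harm.

    options: list of (name, expected_harm) pairs, where expected_harm
    is a number someone has already assigned -- the hard part.
    """
    return min(options, key=lambda opt: opt[1])

# One of the article's hypotheticals, with made-up harm scores:
dilemma = [
    ("swerve toward one cyclist", 1.0),
    ("continue toward five pedestrians", 5.0),
]
name, harm = choose_maneuver(dilemma)
print(name)  # -> swerve toward one cyclist
```

The code is trivial; the values fed into it are not. That asymmetry is what makes "who does the deciding?" the real question.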
Wednesday, December 7, 2016
Do conservatives value ‘moral purity’ more than liberals?
Kate Johnson and Joe Hoover
The Conversation
Originally posted November 21, 2016
Here is an excerpt:
Our results were remarkably consistent with our first study. When people thought the person they were being partnered with did not share their purity concerns, they tended to avoid them. And, when people thought their partner did share their purity concerns, they wanted to associate with them.
As on Twitter, people were much more likely to associate with the other person when they had similar responses to the moral purity scenarios and to avoid them when they had dissimilar responses. And this pattern of responding was much stronger for purity concerns than for similarities or differences on any other moral concern, regardless of people’s religious and political affiliation and the religious and political affiliation they attributed to their partner.
There are many examples of how moral purity concerns are woven deeply into the fabric of social life. For example, have you noticed that when we derogate another person or social group we often rely on adjectives like “dirty,” and “disgusting”? Whether we are talking about “dirty hippies” or an entire class of “untouchables” or “deplorables,” we tend to signal inferiority and separation through moral terms grounded in notions of bodily and spiritual purity.
The article is here.
Moralized Rationality: Relying on Logic and Evidence in the Formation and Evaluation of Belief Can Be Seen as a Moral Issue
Tomas Ståhl, Maarten P. Zaal, Linda J. Skitka
PLOS One
Published: November 16, 2016
Abstract
In the present article we demonstrate stable individual differences in the extent to which a reliance on logic and evidence in the formation and evaluation of beliefs is perceived as a moral virtue, and a reliance on less rational processes is perceived as a vice. We refer to this individual difference variable as moralized rationality. Eight studies are reported in which an instrument to measure individual differences in moralized rationality is validated. Results show that the Moralized Rationality Scale (MRS) is internally consistent, and captures something distinct from the personal importance people attach to being rational (Studies 1–3). Furthermore, the MRS has high test-retest reliability (Study 4), is conceptually distinct from frequently used measures of individual differences in moral values, and it is negatively related to common beliefs that are not supported by scientific evidence (Study 5). We further demonstrate that the MRS predicts morally laden reactions, such as a desire for punishment, of people who rely on irrational (vs. rational) ways of forming and evaluating beliefs (Studies 6 and 7). Finally, we show that the MRS uniquely predicts motivation to contribute to a charity that works to prevent the spread of irrational beliefs (Study 8). We conclude that (1) there are stable individual differences in the extent to which people moralize a reliance on rationality in the formation and evaluation of beliefs, (2) that these individual differences do not reduce to the personal importance attached to rationality, and (3) that individual differences in moralized rationality have important motivational and interpersonal consequences.
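The abstract's reliability claims (internal consistency in Studies 1-3, test-retest reliability in Study 4) rest on two standard computations. The sketch below illustrates them with fabricated responses to a hypothetical four-item scale — these are not the authors' data or code, just the textbook formulas (Cronbach's alpha and a Pearson correlation) the claims refer to.

```python
# Illustrative sketch (fabricated data, not the MRS studies'): the two
# reliability checks named in the abstract.
from statistics import pvariance

def cronbach_alpha(items):
    """Internal consistency. items: one list of scores per scale item."""
    k = len(items)
    item_var = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-person totals
    return k / (k - 1) * (1 - item_var / pvariance(totals))

def pearson_r(x, y):
    """Test-retest reliability is the correlation of time-1 and time-2 scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Made-up responses from five participants on four items:
items = [
    [5, 4, 4, 2, 1],
    [5, 5, 4, 2, 2],
    [4, 4, 3, 3, 1],
    [5, 4, 4, 1, 2],
]
print(round(cronbach_alpha(items), 2))  # -> 0.96

t1 = [19, 17, 15, 8, 6]  # total scores at time 1 (column sums above)
t2 = [18, 17, 14, 9, 7]  # fabricated retest scores
print(round(pearson_r(t1, t2), 2))      # close to 1 for these invented scores
```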
Tuesday, December 6, 2016
Living with the animals: animal or robotic companions for the elderly in smart homes?
Dirk Preuß and Friederike Legal
J Med Ethics
doi:10.1136/medethics-2016-103603
Abstract
Although the use of pet robots in senior living facilities and day-care centres, particularly for individuals suffering from dementia, has been intensively researched, the question of introducing pet robots into domestic settings has been relatively neglected. Ambient assisted living (AAL) offers many interface opportunities for integrating motorised companions. There are diverse medical reasons, as well as arguments from animal ethics, that support the use of pet robots in contrast to living with live animals. However, as this paper makes clear, we should not lose sight of the option of living with animals at home for as long as possible and in conformity with the welfare of the animal assisted by AAL technology.
The article is here.
Perceiving the World Through Group-Colored Glasses: A Perceptual Model of Intergroup Relations
Y. Jenny Xiao, Géraldine Coppin, and Jay J. Van Bavel
Psychological Inquiry Vol. 27 , Iss. 4, 2016
Abstract
Extensive research has investigated societal and behavioral consequences of social group affiliation and identification but has been relatively silent on the role of perception in intergroup relations. We propose the perceptual model of intergroup relations to conceptualize how intergroup relations are grounded in perception. We review the growing literature on how intergroup dynamics shape perception across different sensory modalities and argue that these perceptual processes mediate intergroup relations. The model provides a starting point for social psychologists to study perception as a function of social group dynamics and for perception researchers to consider social influences. We highlight several gaps in the literature and outline areas for future research. Uncovering the role of perception in intergroup relations offers novel insights into the construction of shared reality and may help devise new and unique interventions targeted at the perceptual level.
The article is here.
Monday, December 5, 2016
Why Some People Get Burned Out and Others Don't
Kandi Wiens and Annie McKee
Harvard Business Review
Originally posted November 23, 2016
Here is an excerpt:
What You Can Do to Manage Stress and Avoid Burnout
People do all kinds of destructive things to deal with stress—they overeat, abuse drugs and alcohol, and push harder rather than slowing down. What we learned from our study of chief medical officers is that people can leverage their emotional intelligence to deal with stress and ward off burnout. You, too, might want to try the following:
Don’t be the source of your stress. Too many of us create our own stress, with its full bodily response, merely by thinking about or anticipating future episodes or encounters that might be stressful. People who have a high need to achieve or perfectionist tendencies may be more prone to creating their own stress. We learned from our study that leaders who are attuned to the pressures they put on themselves are better able to control their stress level. As one CMO described, “I’ve realized that much of my stress is self-inflicted from years of being hard on myself. Now that I know the problems it causes for me, I can talk myself out of the non-stop pressure.”
Recognize your limitations. Becoming more aware of your strengths and weaknesses will clue you in to where you need help. In our study, CMOs described the transition from a clinical to a leadership role as a major source of their stress. Those who recognized when the demands were outweighing their abilities didn’t go it alone—they surrounded themselves with trusted advisors and asked for help.
The article is here.
The Simple Economics of Machine Intelligence
Ajay Agrawal, Joshua Gans, and Avi Goldfarb
Harvard Business Review
Originally published November 17, 2016
Here are two excerpts:
The first effect of machine intelligence will be to lower the cost of goods and services that rely on prediction. This matters because prediction is an input to a host of activities including transportation, agriculture, healthcare, energy, manufacturing, and retail.
When the cost of any input falls so precipitously, there are two other well-established economic implications. First, we will start using prediction to perform tasks where we previously didn’t. Second, the value of other things that complement prediction will rise.
(cut)
As machine intelligence improves, the value of human prediction skills will decrease because machine prediction will provide a cheaper and better substitute for human prediction, just as machines did for arithmetic. However, this does not spell doom for human jobs, as many experts suggest. That’s because the value of human judgment skills will increase. Using the language of economics, judgment is a complement to prediction, and therefore when the cost of prediction falls, demand for judgment rises. We’ll want more human judgment.
The article is here.
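The complementarity argument can be made concrete with a toy calculation (my numbers, not the authors'). Suppose each completed decision needs one prediction plus one unit of human judgment, and a decision is only worth taking if its value covers both input costs. Then a fall in the price of predictions raises how many decisions clear the bar — and every one of those extra decisions also demands judgment.

```python
# Toy illustration (invented numbers): prediction and judgment as complements.
# 100 candidate decisions worth $1..$100; each needs one prediction plus
# one $4 unit of judgment, and is taken only if its value covers both costs.

values = list(range(1, 101))

def judgment_demand(prediction_price, judgment_cost=4.0):
    """Count decisions worth taking -- each one consumes a unit of judgment."""
    return sum(1 for v in values if v >= prediction_price + judgment_cost)

print(judgment_demand(prediction_price=50.0))  # -> 47
print(judgment_demand(prediction_price=1.0))   # -> 96
```

When predictions cost $50, only 47 decisions are worth making; at $1, 96 are. Cheaper prediction means more demand for the complement, judgment — the article's core claim in arithmetic form.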
Sunday, December 4, 2016
Do It Well and Do It Right: The Impact of Service Climate and Ethical Climate on Business Performance and the Boundary Conditions
Kaifeng Jiang, Jia Hu, Ying Hong, Hui Liao, & Songbo Liu
Journal of Applied Psychology
Vol 101(11), Nov 2016, 1553-1568.
Abstract
Prior research has demonstrated that service climate can enhance unit performance by guiding employees’ service behavior to satisfy customers. Extending this literature, we identified ethical climate toward customers as another indispensable organizational climate in service contexts and examined how and when service climate operates in conjunction with ethical climate to enhance business performance of service units. Based on data collected in 2 phases over 6 months from multiple sources of 196 movie theaters, we found that service climate and ethical climate had disparate impacts on business performance, operationalized as an index of customer attendance rate and operating income per labor hour, by enhancing service behavior and reducing unethical behavior, respectively. Furthermore, we found that service behavior and unethical behavior interacted to affect business performance, in such a way that service behavior was more positively related to business performance when unethical behavior was low than when it was high. This interactive effect between service and unethical behaviors was further strengthened by high market turbulence and competitive intensity. These findings provide new insight into theoretical development of service management and offer practical implications about how to maximize business performance of service units by managing organizational climates and employee behaviors synergistically.
(cut)
In conclusion, service excellence has become a strategic imperative for service organizations, and prior research has established an unequivocal picture of the value in building a service climate that guides employees to satisfy customers and generate value. Our findings suggest another indispensable and complementary route to service success: in addition to emphasizing service excellence, organizations should highlight high ethical standards to uniquely inhibit unethical behavior. Additionally, both excellent service behavior and adherence to ethics functioned synergistically. Last, our results showed that the synergy between service and ethical behavior was most salient when the market was turbulent or competitive.
The article is here.
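The interaction the abstract reports — service behavior helps performance more when unethical behavior is low — is a standard moderation effect, sketched below with hypothetical coefficients (not the study's estimates). In such a model, the payoff to service behavior is the slope of performance with respect to service, which shrinks as unethical behavior rises.

```python
# Hypothetical moderation sketch (coefficients invented, not the study's):
# performance = b_s*service + b_u*unethical + b_su*(service * unethical),
# with a negative interaction term b_su.

def performance(service, unethical, b_s=2.0, b_u=-1.5, b_su=-1.0):
    return b_s * service + b_u * unethical + b_su * service * unethical

def service_slope(unethical, b_s=2.0, b_su=-1.0):
    """Marginal payoff of service behavior at a given level of unethical behavior."""
    return b_s + b_su * unethical

print(service_slope(unethical=0.2))  # -> 1.8  (low unethical: service pays off)
print(service_slope(unethical=1.5))  # -> 0.5  (high unethical: payoff eroded)
```

The two printed slopes capture the finding's shape: doing it well matters most when you are also doing it right.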
Saturday, December 3, 2016
Data Ethics: The New Competitive Advantage
Gry Hasselbalch
Tech Crunch
Originally posted November 14, 2016
Here is an excerpt:
What is data ethics?
Ethical companies in today’s big data era are doing more than just complying with data protection legislation. They also follow the spirit and vision of the legislation by listening closely to their customers. They’re implementing credible and clear transparency policies for data management. They’re only processing necessary data and developing privacy-aware corporate cultures and organizational structures. Some are developing products and services using Privacy by Design.
A data-ethical company sustains ethical values relating to data, asking: Is this something I myself would accept as a consumer? Is this something I want my children to grow up with? A company’s degree of “data ethics awareness” is not only crucial for survival in a market where consumers progressively set the bar, it’s also necessary for society as a whole. It plays a similar role as a company’s environmental conscience — essential for company survival, but also for the planet’s welfare. Yet there isn’t a one-size-fits-all solution, perfect for every ethical dilemma. We’re in an age of experimentation where laws, technology and, perhaps most importantly, our limits as individuals are tested and negotiated on a daily basis.
The article is here.
Friday, December 2, 2016
New ruling finally requires homeopathic 'treatments' to obey the same labeling standards as real medicines
Lindsay Dodgson
Business Insider
Originally posted November 17, 2016
The Federal Trade Commission issued a statement this month which said that homeopathic remedies have to be held to the same standard as other products that make similar claims. In other words, American companies must now have reliable scientific evidence for health-related claims that their products can treat specific conditions and illnesses.
The article is here.
The Federal Trade Commission (FTC) ruling is here.
An Improved Virtual Hope Box: An App for Suicidal Patients
Principal Investigator: Nigel Bush, Ph.D.
Organization: National Center for Telehealth & Technology
One of the key approaches in treating people who are depressed and thinking about suicide is to help them come up with reasons to go on living, and one of the ways that mental health specialists have traditionally done this is to work with their patients to create a “hope box”—a collection of various items that remind the patients that their lives are meaningful and worth living. The items can be anything from photos of loved ones and certificates of past achievements to lists of future aspirations, CDs of relaxing music, and recordings of loved ones offering inspirational thoughts. The hope box itself can take various forms: a real wooden box or shoe box, a manila envelope, a plastic bag, or anything else that the patient chooses. The patient is asked to keep the hope box nearby and use its contents when it seems hard to go on living.
But it is not always easy to keep such a hope box close at hand. A depressed Veteran or service member might find it inconvenient to take the hope box to work, for example, or might forget to bring it along on a trip. For this reason Nigel Bush and his colleagues at the National Center for Telehealth and Technology have designed a “virtual hope box,” a smartphone app that allows the patient to keep all those reasons for living close by at all times.
The entire app description is here.
Thursday, December 1, 2016
Episode 25: The Assessment, Management, and Treatment of Suicidal Patients
Suicide is the 10th leading cause of death in the United States and the most frequent crisis encountered by mental health professionals. This video/podcast reviews basic information about the assessment, management, and treatment of patients at risk to die from suicide. It fulfills Act 74 requirements for Pennsylvania licensed psychologists, social workers, marriage and family therapists, and professional counselors.
Program Learning Objectives:
At the end of this program the participants will learn basic information that will help them to
- Assess patients who are at risk to die from a suicide attempt;
- Manage the risks of suicide; and
- Treat patients who are at risk to die from a suicide attempt.
Video
Resources
Bongar, B., & Sullivan, G. (2013). The suicidal patient: Clinical and legal standards of care. (3rd ed.). Washington, DC: American Psychological Association.
Bryan, C. J. (2015). Cognitive behavior strategies for preventing suicidal attempts. NY: Routledge.
Jamison, K. R. (2000). Night Falls Fast: Understanding suicide. New York: Random House.
Jobes, D. (2016). Managing suicide risk (2nd Ed.). NY: Guilford.
Joiner, T. (2005). The myths of suicide. Cambridge, MA: Harvard University Press.
McKeon, R. (2009). Suicidal behavior. Cambridge, MA: Hogrefe & Huber.
Disclaimer
As an educational program, this podcast/video does not purport to provide clinical or legal advice on any particular patient. Listeners or viewers with concerns about the assessment, management, or treatment of any patient are urged to seek clinical or legal advice. Also, individual psychotherapists need to use their clinical judgment with their patients and incorporate procedures or techniques not covered in this podcast/video, or modify or omit certain recommendations herein because of the unique needs of their patients.
This one-hour video/podcast provides a basic introduction to the assessment, management, and treatment of patients at risk to die from a suicide attempt. This podcast/video may be a useful refresher course for experienced clinicians. However, listeners/viewers should not assume that the completion of this course will, in and of itself, make them qualified to assess or treat individuals who are at risk to die from suicide. For those who do not have formal training in suicide, this podcast/video should be seen as providing an introduction or exposure to the professional literature on this topic.
Proficiency in dealing with suicidal patients, like proficiency in other areas of professional practice, is best achieved through an organized sequence of study, including mastery of a basic foundation of knowledge and attitudes, and supervision. It is impossible to give a fixed number of hours of continuing education and supervision that professionals need to have before they can be considered proficient in assessing, managing, and treating suicidal patients. Much depends on their existing knowledge base and overall level of clinical skill. Clinicians would do well to consult competency standards from noted authorities, such as those developed by the American Association of Suicidology ( http://www.sprc.org/training-events/amsr), by David Rudd and his associates (Rudd et al., 2008), or Cramer et al. (2014).
After you review the material, click here to link to CE credit.
Click here for slides related to the podcast.
Wednesday, November 30, 2016
Human brain is predisposed to negative stereotypes, new study suggests
Hannah Devlin
The Guardian
Originally posted November 1, 2016
The human brain is predisposed to learn negative stereotypes, according to research that offers clues as to how prejudice emerges and spreads through society.
The study found that the brain responds more strongly to information about groups who are portrayed unfavourably, adding weight to the view that the negative depiction of ethnic or religious minorities in the media can fuel racial bias.
Hugo Spiers, a neuroscientist at University College London, who led the research, said: “The newspapers are filled with ghastly things people do ... You’re getting all these news stories and the negative ones stand out. When you look at Islam, for example, there’s so many more negative stories than positive ones and that will build up over time.”
The article is here.
Can Robots Make Moral Decisions? Should They?
Joelle Renstrom
The Daily Beast
Originally published November 12, 2016
Here is an excerpt:
Whether it’s possible to program a robot with safeguards such as Asimov’s laws is debatable. A word such as “harm” is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov’s fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.
Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It’s doubtful that an algorithm can do that—at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies called “H-bots” from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both “die.” The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what’s best for humanity, especially if it can’t calculate survival odds?
The article is here.
Tuesday, November 29, 2016
Everyone Thinks They’re More Moral Than Everyone Else
By Cari Romm
New York Magazine - The Science of Us
Originally posted November 15, 2016
There’s been a lot of talk over the past week about the “filter bubble” — the ideological cocoon that each of us inhabits, blinding us to opposing views. As my colleague Drake wrote the day after the election, the filter bubble is why so many people were so blindsided by Donald Trump’s win: They only saw, and only read, stories assuming that it wouldn’t happen.
Our filter bubbles are defined by the people and ideas we choose to surround ourselves with, but each of us also lives in a one-person bubble of sorts, viewing the world through our own distorted sense of self. The way we view ourselves in relation to others is a constant tug-of-war between two opposing forces: On one end of the spectrum is something called illusory superiority, a psychological quirk in which we tend to assume that we’re better than average — past research has found it to be true in people estimating their own driving skills, parents’ perceived ability to catch their kid in a lie, even cancer patients’ estimates of their own prognoses. And on the other end of the spectrum, there’s “social projection,” or the assumption that other people share your abilities or beliefs.
Why does imprisoned psychologist still have license to practice?
Charles Keeshan and Susan Sarkauskas
Chicago Daily Herald
Originally published November 11, 2016
Here is an excerpt:
Federal prosecutors said Rinaldi submitted phony bills to Medicare for about $1.1 million over four years, collecting at least $447,155. In nearly a dozen instances, they said, she submitted claims indicating she had provided between 35 and 42 hours of therapy in a single day. In others, she submitted claims stating she had provided care to Chicago-area patients when she was actually in San Diego or Las Vegas.
The article is here.
Monday, November 28, 2016
CRISPR gene-editing tested in a person for the first time
David Cyranoski
Nature
Originally published November 16, 2016
A Chinese group has become the first to inject a person with cells that contain genes edited using the revolutionary CRISPR–Cas9 technique.
On 28 October, a team led by oncologist Lu You at Sichuan University in Chengdu delivered the modified cells into a patient with aggressive lung cancer as part of a clinical trial at the West China Hospital, also in Chengdu.
Earlier clinical trials using cells edited with a different technique have excited clinicians. The introduction of CRISPR, which is simpler and more efficient than other techniques, will probably accelerate the race to get gene-edited cells into the clinic across the world, says Carl June, who specializes in immunotherapy at the University of Pennsylvania in Philadelphia and led one of the earlier studies.
The article is here.
Studying ethics, 'Star Trek' style, at Drake
Daniel P. Finney
The Des Moines Register
Originally posted November 10, 2016
Here is an excerpt:
Sure, the discussion was about ethics of the fictional universe of “Star Trek.” But fiction, like all art, reflects the human condition.
The issue Capt. Sisko wrestled with had parallels to the real world.
Some historians hold the controversial assertion that President Franklin D. Roosevelt knew of the impending attack on Pearl Harbor in 1941 but allowed it to happen to bring the United States into World War II, a move the public opposed before the attack.
In more recent times, former President George W. Bush’s administration used faulty intelligence suggesting Iraq possessed weapons of mass destruction to justify a war that many believed would stabilize the increasingly sectarian Middle East. It did not.
The article is here.
Sunday, November 27, 2016
Approach-Induced Biases in Human Information Sampling
Laurence T. Hunt and others
PLOS Biology
Published: November 10, 2016
Abstract
Information sampling is often biased towards seeking evidence that confirms one’s prior beliefs. Despite such biases being a pervasive feature of human behavior, their underlying causes remain unclear. Many accounts of these biases appeal to limitations of human hypothesis testing and cognition, de facto evoking notions of bounded rationality, but neglect more basic aspects of behavioral control. Here, we investigated a potential role for Pavlovian approach in biasing which information humans will choose to sample. We collected a large novel dataset from 32,445 human subjects, making over 3 million decisions, who played a gambling task designed to measure the latent causes and extent of information-sampling biases. We identified three novel approach-related biases, formalized by comparing subject behavior to a dynamic programming model of optimal information gathering. These biases reflected the amount of information sampled (“positive evidence approach”), the selection of which information to sample (“sampling the favorite”), and the interaction between information sampling and subsequent choices (“rejecting unsampled options”). The prevalence of all three biases was related to a Pavlovian approach-avoid parameter quantified within an entirely independent economic decision task. Our large dataset also revealed that individual differences in the amount of information gathered are a stable trait across multiple gameplays and can be related to demographic measures, including age and educational attainment. As well as revealing limitations in cognitive processing, our findings suggest information sampling biases reflect the expression of primitive, yet potentially ecologically adaptive, behavioral repertoires. One such behavior is sampling from options that will eventually be chosen, even when other sources of information are more pertinent for guiding future action.
The article is here.
Saturday, November 26, 2016
Harvard scientists think they've pinpointed the physical source of consciousness
Fiona McDonald
Sciencealert.com
Originally posted 8 November 2016
Here is an excerpt:
Now the Harvard team has identified not only the specific brainstem region linked to arousal, but also two cortex regions, that all appear to work together to form consciousness.
To figure this out, the team analysed 36 patients in hospital with brainstem lesions - 12 of them were in a coma (unconscious) and 24 were defined as being conscious.
The researchers then mapped their brainstems to figure out if there was one particular region that could explain why some patients had maintained consciousness despite their injuries, while others had become comatose.
What they found was one small area of the brainstem - known as the rostral dorsolateral pontine tegmentum - that was significantly associated with coma. Ten out of the 12 unconscious patients had damage in this area, while just one out of the 24 conscious patients did.
The article is here.
What is data ethics?
Luciano Floridi and Mariarosaria Taddeo
Philosophical Transactions Royal Society A
This theme issue has the founding ambition of landscaping data ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data ethics builds on the foundation provided by computer and information ethics but, at the same time, it refines the approach endorsed so far in this research field, by shifting the level of abstraction of ethical enquiries, from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even data that never translate directly into information but can be used to support actions or generate behaviours, for example. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations—the interactions among hardware, software and data—rather than on the variety of digital technologies that enable them. And it emphasizes the complexity of the ethical challenges posed by data science. Because of such complexity, data ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of data science and its applications within a consistent, holistic and inclusive framework. Only as a macroethics will data ethics provide solutions that can maximize the value of data science for our societies, for all of us and for our environments. This article is part of the themed issue ‘The ethical impact of data science’.
The article is here.
Friday, November 25, 2016
A New Spin on the Quantum Brain
By Jennifer Ouellette
Quanta Magazine
November 2, 2016
The mere mention of “quantum consciousness” makes most physicists cringe, as the phrase seems to evoke the vague, insipid musings of a New Age guru. But if a new hypothesis proves to be correct, quantum effects might indeed play some role in human cognition. Matthew Fisher, a physicist at the University of California, Santa Barbara, raised eyebrows late last year when he published a paper in Annals of Physics proposing that the nuclear spins of phosphorus atoms could serve as rudimentary “qubits” in the brain — which would essentially enable the brain to function like a quantum computer.
As recently as 10 years ago, Fisher’s hypothesis would have been dismissed by many as nonsense. Physicists have been burned by this sort of thing before, most notably in 1989, when Roger Penrose proposed that mysterious protein structures called “microtubules” played a role in human consciousness by exploiting quantum effects. Few researchers believe such a hypothesis plausible. Patricia Churchland, a neurophilosopher at the University of California, San Diego, memorably opined that one might as well invoke “pixie dust in the synapses” to explain human cognition.
The article is here.
Thursday, November 24, 2016
Middle School Suicides Reach An All-Time High
Elissa Nadworny
npr.com
Originally posted November 4, 2016
There's a perception that children don't kill themselves, but that's just not true. A new report shows that, for the first time, suicide rates for U.S. middle school students have surpassed the rate of death by car crashes.
The suicide rate among youngsters ages 10 to 14 has been steadily rising, and doubled in the U.S. from 2007 to 2014, according to the Centers for Disease Control and Prevention. In 2014, 425 young people 10 to 14 years of age died by suicide.
The article and the video are here.
National Suicide Hotline: 1-800-273-8255
Wednesday, November 23, 2016
Increase in US Suicide Rates and the Critical Decline in Psychiatric Beds
Tarun Bastiampillai, Steven S. Sharfstein, & Stephen Allison
JAMA. Published online November 3, 2016
The closure of most US public mental hospital beds and the reduction in acute general psychiatric beds over recent decades have led to a crisis, as overall inpatient capacity has not kept pace with the needs of patients with psychiatric disorders. Currently, state-funded psychiatric beds are almost entirely forensic (ie, allocated to people within the criminal justice system who have been charged or convicted). Very limited access to nonforensic psychiatric inpatient care is contributing to the risks of violence, incarceration, homelessness, premature mortality, and suicide among patients with psychiatric disorders. In particular, a safe minimum number of psychiatric beds is required to respond to suicide risk given the well-established and unchanging prevalence of mental illness, relapse rates, treatment resistance, nonadherence with treatment, and presentations after acute social crisis. Very limited access to inpatient care is likely a contributing factor for the increasing US suicide rate. In 2014, suicide was the second-leading cause of death for people aged between 10 and 34 years and the tenth-leading cause of death for all age groups, with firearm trauma being the leading method.
Currently, the United States has a relatively low 22 psychiatric beds per 100 000 population compared with the Organisation for Economic Cooperation and Development (OECD) average of 71 beds per 100 000 population. Only 4 of the 35 OECD countries (Italy, Chile, Turkey, and Mexico) have fewer psychiatric beds per 100 000 population than the United States. Although European health systems are very different from the US health system, they provide a useful comparison. For instance, Germany, Switzerland, and France have 127, 91, and 87 psychiatric beds per 100 000 population, respectively.
The article is here.