Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, June 30, 2017

Ethics and Artificial Intelligence With IBM Watson's Rob High

Blake Morgan
Forbes.com
Originally posted June 12, 2017

Artificial intelligence seems to be popping up everywhere, and it has the potential to change nearly everything we know about data and the customer experience. However, it also brings up new issues regarding ethics and privacy.

One of the keys to keeping AI ethical is for it to be transparent, says Rob High, vice president and chief technology officer of IBM Watson. When customers interact with a chatbot, for example, they need to know they are communicating with a machine and not an actual human. AI, like most other technology tools, is most effective when it is used to extend the natural capabilities of humans instead of replacing them. That means that AI and humans are best when they work together and can trust each other.

Chatbots are one of the most commonly used forms of AI. Although they can be used successfully in many ways, there is still a lot of room for growth. As they currently stand, chatbots mostly perform basic actions like turning on lights, providing directions, and answering simple questions that a person asks directly. However, in the future, chatbots should and will be able to go deeper to find the root of the problem. For example, a person asking a chatbot what her bank balance is might be asking the question because she wants to invest money or make a big purchase—a futuristic chatbot could find the real reason she is asking and turn it into a more developed conversation. In order to do that, chatbots will need to ask more questions and drill deeper, and humans need to feel comfortable providing their information to machines.

The article is here.

Ethical Interventions Means Giving Consumers A Say

Susan Liautaud
Wired Magazine
Originally published June 12, 2017

Here is an excerpt:

Increasingly, the people and companies with the technological or scientific ability to create new products or innovations are de facto making policy decisions that affect human safety and society. But these decisions are often based on the creator’s intent for the product, and they don't always take into account its potential risks and unforeseen uses. What if gene-editing is diverted for terrorist ends? What if human-pig chimeras mate? What if citizens prefer to see birds rather than flying cars when they look out a window? (Apparently, this is a real risk. Uber plans to offer flight-hailing apps by 2020.) What if Echo Look leads to mental health issues for teenagers? Who bears responsibility for the consequences?

Jennifer Doudna and Emmanuelle Charpentier’s landmark 2014 article in Science, “The new frontier of genome engineering with CRISPR-Cas9,” called for a broader discussion among “scientists and society at large” about the technology's responsible use. Other leading scientists have joined the call for caution before the technique is intentionally used to alter the human germ line. The National Academies of Sciences, Engineering, and Medicine recently issued a report recommending that the ethical framework applied to gene therapy also be used when considering Crispr applications. In effect, the experts ask whether their scientific brilliance should legitimize them as decision-makers for all of us.

Crispr might prevent Huntington’s disease and cure cancer. But should errors occur, it’s hard to predict the outcome or prevent its benign use (by thoughtful and competent people) or misuse (by ill-intentioned actors).

Who should decide how Crispr should be used: Scientists? Regulators? Something in between, such as an academic institution, medical research establishment, or professional/industry association? The public? Which public, given the global impact of the decisions? Are ordinary citizens equipped to make such technologically complex ethical decisions? Who will inform the decision-makers about possible risks and benefits?

The article is here.

Thursday, June 29, 2017

Can a computer administer a Wechsler Intelligence Test?

Vrana, Scott R.; Vrana, Dylan T.
Professional Psychology: Research and Practice, Vol 48(3), Jun 2017, 191-198.

Abstract

Prompted by the rapid development of Pearson’s iPad-based Q-interactive platform for administering individual tests of cognitive ability (Pearson, 2016c), this article speculates about what it would take for a computer to administer the current versions of the Wechsler individual intelligence tests without the involvement of a psychologist or psychometrist. We consider the mechanics of administering and scoring each subtest and the more general clinical skills of motivating the client to perform, making observations of verbal and nonverbal behavior, and responding to the client’s off-task comments, questions, and nonverbal cues. It is concluded that we are very close to the point, given current hardware and artificial intelligence capabilities, at which administration of all subtests of the Wechsler Adult Intelligence Scale-Fourth Edition (PsychCorp, 2008) and Wechsler Intelligence Scale for Children-Fifth Edition (PsychCorp, 2014), and all assessment functions of the human examiner, could be performed by a computer. Potential acceptability of computer administration to clients and the psychological community is considered.

The article is here.

When is a leak ethical?

Cassandra Burke Robertson
The Conversation
Originally published June 12, 2017

Here is an excerpt:

Undoubtedly, leaking classified information violates the law. For some individuals, such as lawyers, leaking unclassified but still confidential information may also violate the rules of professional conduct.

But when is it ethical to leak?

Public interest disclosures

I am a scholar of legal ethics who has studied ethical decision-making in the political sphere.

Research has found that people are willing to blow the whistle when they believe that their organization has engaged in “corrupt and illegal conduct.” They may also speak up to prevent larger threats to cherished values, such as democracy and the rule of law. Law professor Kathleen Clark uses the phrase “public interest disclosures” to refer to such leaks.

Scholars who study leaking suggest that it can indeed be ethical to leak when the public benefit of the information is strong enough to outweigh the obligation to keep it secret.

The article is here.

Wednesday, June 28, 2017

How Milton Bradley’s morality play shaped the modern board game

An interview with Tristan Donovan by Christopher Klein
The Boston Globe
Originally published May 26, 2017

Here is an excerpt:

Donovan: By 1860, America had the start of the board game industry, but it wasn’t big. Production was done mostly by hand, since there weren’t big printing presses. An added complication at the time was that America was a much more puritanical society, and game-playing of any kind was seen by many as sinful and a waste of time.

Milton Bradley himself was fairly devout. When he set out to make a board game, he was worried his friends would frown upon it, so he wanted to make a game that would teach morality. The basic idea of The Checkered Game of Life was to amass points and in the end reach “Happy Old Age.” You could accumulate points by landing on squares for virtues such as honor and happiness, and there were squares to avoid such as gambling and idleness. It’s steering players to the righteous path.

Ideas: That morality also complicated game play.

Donovan: Dice were considered evil and associated with gambling by many, so instead he used a teetotum, which had a series of numbers printed on it that you spun like a top.

Ideas: George Parker, on the other hand, built his name on rejecting a lot of those conventions.

Donovan: All the games that were available to Parker growing up were largely morality tales like The Checkered Game of Life. He was fed up with it. He wanted to play a game and didn’t want it to be a Sunday sermon every time. His first game, Banking, was basically about amassing money through speculation. The goal was to be the richest, rather than the first to achieve a happy old age. Parker created games that were about fun and making money, which found appeal as Gilded Age America transitioned from a Puritanical society to one about making money and doing well in a career.

The interview is here.

A Teachable Ethics Scandal

Mitchell Handelsman
Teaching of Psychology

Abstract

In this article, I describe a recent scandal involving collusion between officials at the American Psychological Association (APA) and the U.S. Department of Defense, which appears to have enabled the torture of detainees at the Guantanamo Bay detention facility. The scandal is a relevant, complex, and engaging case that teachers can use in a variety of courses. Details of the scandal exemplify a number of psychological concepts, including obedience, groupthink, terror management theory, group influence, and motivation. The scandal can help students understand several factors that make ethical decision-making difficult, including stress, emotions, and cognitive factors such as loss aversion, anchoring, framing, and ethical fading. I conclude by exploring some parallels between the current torture scandal and the development of APA’s ethics guidelines regarding the use of deception in research.

The article is here.

Tuesday, June 27, 2017

Resisting Temptation for the Good of the Group: Binding Moral Values and the Moralization of Self-Control

Mooijman, Marlon; Meindl, Peter; Oyserman, Daphna; Monterosso, John; Dehghani, Morteza; Doris, John M.; Graham, Jesse
Journal of Personality and Social Psychology, Jun 12, 2017.

Abstract

When do people see self-control as a moral issue? We hypothesize that the group-focused “binding” moral values of Loyalty/betrayal, Authority/subversion, and Purity/degradation play a particularly important role in this moralization process. Nine studies provide support for this prediction. First, moralization of self-control goals (e.g., losing weight, saving money) is more strongly associated with endorsing binding moral values than with endorsing individualizing moral values (Care/harm, Fairness/cheating). Second, binding moral values mediate the effect of other group-focused predictors of self-control moralization, including conservatism, religiosity, and collectivism. Third, guiding participants to consider morality as centrally about binding moral values increases moralization of self-control more than guiding participants to consider morality as centrally about individualizing moral values. Fourth, we replicate our core finding that moralization of self-control is associated with binding moral values across studies differing in measures and design—whether we measure the relationship between moral and self-control language across time, the perceived moral relevance of self-control behaviors, or the moral condemnation of self-control failures. Taken together, our findings suggest that self-control moralization is primarily group-oriented and is sensitive to group-oriented cues.

The article is here.

No Pain, All Gain: The Case for Farming Organs in Brainless Humans

Ruth Stirton and David Lawrence
BMJ Blogs
Originally posted June 10, 2017

Here is an excerpt:

A significant challenge to this practice is that it is probably unethical to use an animal in this way for the benefit of humans. Pigs in particular have a relatively high level of sentience and consciousness, which should not be dismissed lightly.  Some would argue that animals with certain levels of sentience and consciousness – perhaps those capable of understanding what is happening to them – have moral worth and are entitled to respect and protection, and to be treated with dignity.  It is inappropriate to simply use them for the benefit of humanity.  Arguably, the level of protection ought to correlate to the level of understanding (or personhood), and thus the pig deserves a greater level of protection than the sea cucumber.  The problem here is that the sea cucumber is not sufficiently similar to the human to be of use to us when we’re thinking about organs for transplantation purposes.  The useful animals are those closest to us, which are by definition those animals with more complex brains and neural networks, and which consequently attract higher moral value.

The moral objection to using animals in this way arises because of their levels of cognition.  This moral objection would disappear if we could prevent the animals ever developing the capacity for consciousness: they would never become entities capable of being harmed.  If we were able to genetically engineer a brainless pig, leaving only the minimal neural circuits necessary to maintain heart and lung function, it could act as an organic vessel for growing organs for transplantation.  The objection based on the use of a conscious animal disappears, since this entity – it’s not clear the extent to which it is possible to call it an animal – would have no consciousness.

The blog post is here.

Monday, June 26, 2017

What’s the Point of Professional Ethical Codes?

Iain Brassington
BMJ Blogs
June 13, 2017

Here is an excerpt:

They can’t be meant as a particularly useful tool for solving deep moral dilemmas: they’re much too blunt for that, often presuppose too much, and tend to bend to suit the law.  To think that because the relevant professional code enjoins x it follows that x is permissible or right smacks of a simple appeal to authority, and this flies in the face of what it is to be a moral agent in the first place.  But what a professional code of ethics may do is to provide a certain kind of Bolamesque legal defence: if your having done φ attracts a claim that it’s negligent or unreasonable or something like that, being able to point out that your professional body endorses φ-ing will help you out.  But professional ethics, and what counts as professional discipline, stretches way beyond that.  For example, instances of workplace bullying can be matters of great professional and ethical import, but it’s not at all obvious that the law should be involved.

There’s a range of reasons why someone’s behaviour might be of professional ethical concern.  Perhaps the most obvious is a concern for public protection.  If someone has been found to have behaved in a way that endangers third parties, then the profession may well want to intervene.

The blog post is here.

Antecedents and Consequences of Medical Students’ Moral Decision Making during Professionalism Dilemmas

Lynn Monrouxe, Malissa Shaw, and Charlotte Rees
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 568-577.

Abstract

Medical students often experience professionalism dilemmas (which differ from ethical dilemmas) wherein students sometimes witness and/or participate in patient safety, dignity, and consent lapses. When faced with such dilemmas, students make moral decisions. If students’ action (or inaction) runs counter to their perceived moral values—often due to organizational constraints or power hierarchies—they can suffer moral distress, burnout, or a desire to leave the profession. If moral transgressions are rationalized as being for the greater good, moral distress can decrease as dilemmas are experienced more frequently (habituation); if no learner benefit is seen, distress can increase with greater exposure to dilemmas (disturbance). We suggest how medical educators can support students’ understandings of ethical dilemmas and facilitate their habits of enacting professionalism: by modeling appropriate resistance behaviors.

The article is here.

Sunday, June 25, 2017

Managing for Academic Integrity in Higher Education: Insights From Behavioral Ethics

Sheldene Simola
Scholarship of Teaching and Learning in Psychology
Vol 3(1), Mar 2017, 43-57.

Despite the plethora of research on factors associated with academic dishonesty and ways of averting it, such dishonesty remains a significant concern. There is a need to identify overarching frameworks through which academic dishonesty might be understood, which might also suggest novel yet research-supported practical insights aimed at prevention. Hence, this article draws upon the burgeoning field of behavioral ethics to highlight a dual processing framework on academic dishonesty and to provide additional and sometimes counterintuitive practical insights into preventing this predicament. Six themes from within behavioral ethics are elaborated. These indicate the roles of reflective, conscious deliberation in academic (dis)honesty, as well as reflexive, nonconscious judgment; the roles of rationality and emotionality; and the ways in which conscious and nonconscious situational cues can cause individual moral identity or moral standards to become more or less salient to, and therefore influential in, decision-making. Practical insights and directions for future research are provided.

The article is here.

Saturday, June 24, 2017

Consistent Belief in a Good True Self in Misanthropes and Three Interdependent Cultures.

J. De Freitas, H. Sarkissian, G. E. Newman, I. Grossmann, and others
Cognitive Science, 2017 Jun 6.

Abstract

People sometimes explain behavior by appealing to an essentialist concept of the self, often referred to as the true self. Existing studies suggest that people tend to believe that the true self is morally virtuous; that is, deep inside, every person is motivated to behave in morally good ways. Is this belief particular to individuals with optimistic beliefs or people from Western cultures, or does it reflect a widely held cognitive bias in how people understand the self? To address this question, we tested the good true self theory against two potential boundary conditions that are known to elicit different beliefs about the self as a whole. Study 1 tested whether individual differences in misanthropy, the tendency to view humans negatively, predict beliefs about the good true self in an American sample. The results indicate a consistent belief in a good true self, even among individuals who have an explicitly pessimistic view of others. Study 2 compared true self-attributions across cultural groups, by comparing samples from an independent country (USA) and a diverse set of interdependent countries (Russia, Singapore, and Colombia). Results indicated that the direction and magnitude of the effect are comparable across all groups we tested. The belief in a good true self appears robust across groups varying in cultural orientation or misanthropy, suggesting a consistent psychological tendency to view the true self as morally good.

A version of the paper is here.

Friday, June 23, 2017

Speaking up about traditional and professionalism-related patient safety threats: a national survey of interns and residents

Martinez W, Lehmann LS, Thomas EJ, et al
BMJ Qual Saf Published Online First: 25 April 2017.

Background Open communication between healthcare professionals about care concerns, also known as ‘speaking up’, is essential to patient safety.

Objective Compare interns' and residents' experiences, attitudes and factors associated with speaking up about traditional versus professionalism-related safety threats.

Design Anonymous, cross-sectional survey.

Setting Six US academic medical centres, 2013–2014.

Participants 1800 medical and surgical interns and residents (47% responded).

Measurements Attitudes about, barriers and facilitators for, and self-reported experience with speaking up. Likelihood of speaking up and the potential for patient harm in two vignettes. Safety Attitude Questionnaire (SAQ) teamwork and safety scales; and Speaking Up Climate for Patient Safety (SUC-Safe) and Speaking Up Climate for Professionalism (SUC-Prof) scales.

Results Respondents more commonly observed unprofessional behaviour (75%, 628/837) than traditional safety threats (49%, 410/837; p<0.001), but reported speaking up about unprofessional behaviour less commonly (46%, 287/628 vs 71%, 291/410; p<0.001). Respondents more commonly reported fear of conflict as a barrier to speaking up about unprofessional behaviour compared with traditional safety threats (58%, 482/837 vs 42%, 348/837; p<0.001). Respondents were also less likely to speak up to an attending physician in the professionalism vignette than the traditional safety vignette, even when they perceived high potential patient harm (20%, 49/251 vs 71%, 179/251; p<0.001). Positive perceptions of SAQ teamwork climate and SUC-Safe were independently associated with speaking up in the traditional safety vignette (OR 1.90, 99% CI 1.36 to 2.66 and 1.46, 1.02 to 2.09, respectively), while only a positive perception of SUC-Prof was associated with speaking up in the professionalism vignette (1.76, 1.23 to 2.50).

Conclusions Interns and residents commonly observed unprofessional behaviour yet were less likely to speak up about it compared with traditional safety threats even when they perceived high potential patient harm. Measuring SUC-Safe, and particularly SUC-Prof, may fill an existing gap in safety culture assessment.

The article is here.

Moral Injury, Posttraumatic Stress Disorder, and Suicidal Behavior Among National Guard Personnel.

Craig Bryan, Anna Belle Bryan, Erika Roberge, Feea Leifker, & David Rozek
Psychological Trauma: Theory, Research, Practice, and Policy 

Abstract

Objective: To empirically examine similarities and differences in the signs and symptoms of posttraumatic stress disorder (PTSD) and moral injury and to determine if the combination of these 2 constructs is associated with increased risk for suicidal thoughts and behaviors in a sample of U.S. National Guard personnel. Method: 930 National Guard personnel from the states of Utah and Idaho completed an anonymous online survey. Exploratory structural equation modeling (ESEM) was used to test a measurement model of PTSD and moral injury. A structural model was next constructed to test the interactive effects of PTSD and moral injury on history of suicide ideation and attempts. Results: Results of the ESEM confirmed that PTSD and moral injury were distinct constructs characterized by unique symptoms, although depressed mood loaded onto both PTSD and moral injury. The interaction of PTSD and moral injury was associated with significantly increased risk for suicide ideation and attempts. A sensitivity analysis indicated the interaction remained a statistically significant predictor of suicide attempt even among the subgroup of participants with a history of suicide ideation. Conclusion: PTSD and moral injury represent separate constructs with unique signs and symptoms. The combination of PTSD and moral injury confers increased risk for suicidal thoughts and behaviors, and differentiates between military personnel who have attempted suicide and those who have only thought about suicide.

The article is here.

Thursday, June 22, 2017

Is it dangerous for humans to depend on computers?

Rory Cellan-Jones
BBC News
Originally published June 5, 2017

Here is an excerpt:

In Britain, doctors whose computers froze during the recent ransomware attack had to turn patients away. In Ukraine, there were power cuts when hackers attacked the electricity system, and five years ago, millions of Royal Bank of Scotland customers were unable to get at their money for days after problems with a software upgrade.

Already some people have had enough. This week a letter to the Guardian newspaper warned that the modern world was "dangerously exposed by this reliance on the internet and new technology".

The correspondent, quite possibly a retired government employee, continued: "there are just enough old-time civil servants left alive to turn back the clock and take away our dangerous dependence on modern technology."

Somehow, though, I don't see this happening. Airlines are not going to scrap the computers and tick passengers off on a paper list before they climb aboard, bank clerks will not be entering transactions in giant ledgers in copperplate writing.

In fact, computers will take over more and more functions once restricted to humans, most of them far more useful than a game of Go. And that means that at home, at work and at play we will have to get used to seeing our lives disrupted when those clever machines suffer the occasional nervous breakdown.

The article is here.

Teaching Humility in an Age of Arrogance

Michael Patrick Lynch
The Chronicle of Higher Education
Originally published June 5, 2017

Here is an excerpt:

Our cultural embrace of epistemic or intellectual arrogance is the result of a toxic mix of technology, psychology, and ideology. To combat it, we have to reconnect with some basic values, including ones that philosophers have long thought were essential both to serious intellectual endeavors and to politics.

One of those ideas, as I just noted, is belief in objective truth. But another, less-noted concept is intellectual humility. By intellectual humility, I refer to a cluster of attitudes that we can take toward ourselves — recognizing your own fallibility, realizing that you don’t really know as much as you think, and owning your limitations and biases.

But being intellectually humble also means taking an active stance. It means seeing your worldview as open to improvement by the evidence and experience of other people. Being open to improvement is more than just being open to change. And it isn’t just a matter of self-improvement — using your genius to know even more. It is a matter of seeing your view as capable of improvement because of what others contribute.

Intellectual humility is not the same as skepticism. Improving your knowledge must start from a basis of rational conviction. That conviction allows you to know when to stop inquiring, when to realize that you know enough — that the earth really is round, the climate is warming, the Holocaust happened, and so on. That, of course, is tricky, and many a mistake in science and politics has been made because someone stopped inquiring before they should have. Hence the emphasis on evidence; being intellectually humble requires being responsive to the actual evidence, not to flights of fancy or conspiracy theories.

The article is here.

Wednesday, June 21, 2017

The Specialists’ Stranglehold on Medicine

Jamie Koufman
The New York Times - Opinion
Originally posted June 3, 2017

Here is an excerpt:

Neither the Affordable Care Act nor the Republicans’ American Health Care Act addresses the way specialists are corrupting our health care system. What we really need is what I’d call a Health Care Accountability Act.

This law would return primary care to the primary care physician. Every patient should have one trusted doctor who is responsible for his or her overall health. Resources must be allocated to expand those doctors’ education and training. And then we have to pay them more.

There are approximately 860,000 practicing physicians in the United States today, and too few — about a third — deliver primary care. In general, they make less than half as much money as specialists. I advocate a 10 percent to 20 percent reduction in specialist reimbursement, with that money being allocated to primary care doctors.

Those doctors should have to approve specialist referrals — they would be the general contractor in the building metaphor. There is strong evidence that long-term oversight by primary care doctors increases the quality of care and decreases costs.

The bill would mandate the disclosure of procedures’ costs up front. The way it usually works now is that right before a medical procedure, patients are asked to sign multiple documents, including a guarantee that they will pay whatever is not covered by insurance.  But they will have no way of knowing what the procedure actually costs. Their insurance may cover 90 percent, but are they liable for 10 percent of $10,000 or $100,000?

We also need more oversight of those costs. Instead of letting specialists’ lobbyists set costs, payment algorithms should be determined by doctors with no financial stake in the field, or even by non-physicians like economists. An Independent Payment Advisory Board was created by Obamacare; it should be expanded and adequately funded.

The article is here.

The GOP's risky premium pledge

Jennifer Haberkorn
Politico.com
Originally posted June 5, 2017

Senate Republicans may be all over the map on an Obamacare repeal plan, but on one fundamental point — reducing insurance premiums — they are in danger of overpromising and underdelivering.

The reality is they have only a few ways to reduce Americans’ premiums: Offer consumers bigger subsidies. Allow insurers to offer skimpier coverage. Or permit insurers to charge more — usually much more — to those with pre-existing illnesses and who are older and tend to rack up the biggest bills.

Since there’s no appetite within the GOP for throwing more taxpayer money at the problem, Republicans will need to make some hard decisions to hit their goal. But the effort to drive down premium prices will inevitably create a new set of winners and losers and complicate leadership’s path to the 50 votes they need to fulfill their seven-year promise to repeal Obamacare.

“Anyone can figure out how to reduce premiums,” said Sen. Chris Murphy (D-Conn.). “You can reduce premiums by kicking everybody that has a pre-existing condition off insurance or dramatically reducing benefits.”

Republicans say that Obamacare’s insurance regulations are responsible for making coverage prohibitively expensive and contend that premiums would fall if those rules are rolled back. They say they have multiple ideas about how to roll those back while also insulating the most vulnerable but have yet to weave those together into actual legislation.

The article is here.

Tuesday, June 20, 2017

Face-saving or fair-minded: What motivates moral behavior?

Alexander W. Cappelen, Trond Halvorsen, Erik Ø. Sørensen, and Bertil Tungodden
Journal of the European Economic Association (2017) 15 (3): 540-557.

Abstract

We study the relative importance of intrinsic moral motivation and extrinsic social motivation in explaining moral behavior. The key feature of our experiment is that we introduce a dictator game design that manipulates these two sources of motivation. In one set of treatments, we manipulate the moral argument for sharing, in another we manipulate the information given to the recipient about the context of the experiment and the dictator's decision. The paper offers two main findings. First, we provide evidence of intrinsic moral motivation being of fundamental importance. Second, we show that extrinsic social motivation matters and is crowding-in with intrinsic moral motivation. We also show that intrinsic moral motivation is strongly associated with self-reported charitable giving outside the lab and with political preferences.

The research is here.

Theory from the ruins

Stuart Walton
Aeon
Originally posted May 31, 2017

Here is an excerpt:

When reason enabled human beings to interpret the natural world around them in ways that ceased to frighten them, it was a liberating faculty of the mind. However, in the Frankfurt account, its fatal flaw was that it depended on domination, on subjecting the external world to the processes of abstract thought. Eventually, by a gradual process of trial and error, everything in the phenomenal world would be explained by scientific investigation, which would lay bare the previously hidden rules and principles by which it operated, and which could be demonstrated anew any number of times. The rationalising faculty had thereby become, according to the Frankfurt philosophers, a tyrannical process, through which all human experience of the world would be subjected to infinitely repeatable rational explanation; a process in which reason had turned from being liberating to being the instrumental means of categorising and classifying an infinitely various reality.

Culture itself was subject to a kind of factory production in the cinema and recording industries. The Frankfurt theorists maintained a deep distrust of what passed as ‘popular culture’, which neither enlightened nor truly entertained the mass of society, but only kept people in a state of permanently unsatiated demand for the dross with which they were going to be fed anyway. And driving the whole coruscating analysis was a visceral commitment to the Marxist theme of the presentness of the past. History was not just something that happened yesterday, but a dynamic force that remained active in the world of today, which was its material product and its consequence. By contrast, the attitude of instrumental reason produced only a version of the past that ascended towards the triumph of the enlightened and democratic societies of the present day.

Since these ideas were first elaborated, they have been widely rejected or misunderstood. Postmodernism, which refuses all historical grand narratives, has done its best to overlook what are some of the grandest narratives that Western philosophy ever produced. Despite this, these polemical theories remain indispensable in the present globalised age, when the dilemmas and malaises that were once specific to Western societies have expanded to encompass almost the whole globe. As a new era of irrationalism dawns on humankind, with corruption and mendacity becoming a more or less openly avowed modus operandi of all shades of government, the Frankfurt analysis urges itself upon us once more.

The article is here.

Monday, June 19, 2017

The Value of Sharing Information: A Neural Account of Information Transmission

Elisa C. Baek, Christin Scholz, Matthew Brook O’Donnell, & Emily Falk
Psychological Science
May 2017

Abstract

Humans routinely share information with one another. What drives this behavior? We used neuroimaging to test an account of information selection and sharing that emphasizes inherent reward in self-reflection and connecting with other people. Participants underwent functional MRI while they considered personally reading and sharing New York Times articles. Activity in neural regions involved in positive valuation, self-related processing, and taking the perspective of others was significantly associated with decisions to select and share articles, and scaled with preferences to do so. Activity in all three sets of regions was greater when participants considered sharing articles with other people rather than selecting articles to read themselves. The findings suggest that people may consider value not only to themselves but also to others even when selecting news articles to consume personally. Further, sharing heightens activity in these pathways, in line with our proposal that humans derive value from self-reflection and connecting to others via sharing.

The article is here.

The behavioral and neural basis of empathic blame

Indrajeet Patil, Marta Calò, Federico Fornasier, Fiery Cushman, Giorgia Silani
Forthcoming in Scientific Reports

Abstract

Mature moral judgments rely both on a perpetrator’s intent to cause harm, and also on the actual harm caused—even when unintended. Much prior research asks how intent information is represented neurally, but little asks how even unintended harms influence judgment. We interrogate the psychological and neural basis of this process, focusing especially on the role of empathy for the victim of a harmful act. Using fMRI, we found that the ‘empathy for pain’ network was involved in encoding harmful outcomes and integrating harmfulness information for different types of moral judgments, and individual differences in the extent to which this network was active during encoding and integration of harmfulness information determined severity of moral judgments. Additionally, activity in the network was down-regulated for acceptability, but not blame, judgments in the accidental harm condition, suggesting that these two types of moral evaluations are neurobiologically dissociable. These results support a model of “empathic blame”, whereby the perceived suffering of a victim colors moral judgment of an accidental harmdoer.

The paper is here.

Sunday, June 18, 2017

Has Physician-Assisted Death Become the “Good Death?”

Franklin G. Miller
The Hastings Center
Originally published May 30, 2017

“Death with dignity” for the past 40 years has meant, for many people, avoiding unwanted medical technology and dying in a hospital.  A “natural” death at home or in a hospice facility has been the goal.   During the last 20 years, physician-assisted suicide has been legalized for terminally ill patients in several states of the United States, and recently “medical assistance in dying,” which also includes active euthanasia, has become legal in Canada.  How should we think about what constitutes a good death now?

There are signs of a cultural shift, in which physician-assisted death is not just a permitted choice by which individuals can control the timing and circumstances of their death but is taken as a model of the good death.  A recent lengthy front page article in the New York Times recounts a case of physician-assisted death in Canada in a way that strongly suggests that a planned, orchestrated death is the ideal way to die.  While I have long supported a legal option of physician-assisted suicide for the terminally ill, I believe that this cultural shift deserves critical scrutiny.

The article is here.

Saturday, June 17, 2017

Taking Single-Payer Seriously

Dave Kamper
Jacobin Magazine
Originally published May 28, 2017

Here is an excerpt:

Medicare for All wouldn’t just scrap Obamacare — it would uproot the entire industry. It would be a huge efficiency savings. But it would also be devastating in the short term for hundreds of thousands of working people whose only crime was getting a job at an insurance company, and the hundreds of thousands more who work as billing specialists for clinics and hospitals (the number of medical assistants shot up 44 percent between 2011 and 2016). Yes, the CEO of United Health Group made $101 million in 2011. But few of the 230,000 other people working for the company saw money like that.

Bernie Sanders’s recently announced Medicare for All plan asserts that we “need a health care system that significantly reduces overhead, administrative costs, and complexity,” and projects that his plan would save $6 trillion over ten years.

The article is here.

Friday, June 16, 2017

Do You Want to Be a Cyborg?

Agata Sagan and Peter Singer
Project Syndicate
Originally posted May 17, 2017

Here is an excerpt:

In the United States, Europe, and most other countries with advanced biomedical research, strict regulations on the use of human subjects would make it extremely difficult to get permission to carry out experiments aimed at enhancing our cognitive abilities by linking our brains to computers. US regulations drove Phil Kennedy, a pioneer in the use of computers to enable paralyzed patients to communicate by thought alone, to have electrodes implanted in his own brain in order to make further scientific progress. Even then, he had to go to Belize, in Central America, to find a surgeon willing to perform the operation. In the United Kingdom, cyborg advocate Kevin Warwick and his wife had data arrays implanted in their arms to show that direct communication between the nervous systems of separate human beings is possible.

Musk has suggested that the regulations governing the use of human subjects in research could change. That may take some time. Meanwhile freewheeling enthusiasts are going ahead anyway. Tim Cannon doesn’t have the scientific or medical qualifications of Phil Kennedy or Kevin Warwick, but that hasn’t stopped him from co-founding a Pittsburgh company that implants bionic devices, often after he has first tried them out on himself. His attitude is, as he said at an event billed as “The world’s first cyborg-fair,” held in Düsseldorf in 2015, “Let’s just do it and really go for it.”

People at the Düsseldorf cyborg-fair had magnets, radio frequency identification chips, and other devices implanted in their fingers or arms. The surgery is often carried out by tattooists and sometimes veterinarians, because qualified physicians and surgeons are reluctant to operate on healthy people.

The article is here.

On What Basis Do Terrorists Make Moral Judgments?

Kendra Pierre-Louis
Popular Science
Originally published May 26, 2017

Here is an excerpt:

“Multiple studies across the world have systematically shown that in judging the morality of an action, civilized individuals typically attach greater importance to intentions than outcomes,” Ibáñez told PopSci. “If an action is aimed to induce harm, it does not matter whether it was successful or not: most people consider it as less morally admissible than other actions in which harm was neither intended nor inflicted, or even actions in which harm was caused by accident.”

For most of us, intent matters. If I mean to slam you to the ground and I fail, that’s far worse than if I don’t mean to slam you to the ground and I do. If that sounds like a no-brainer, you should know that for the terrorists in the study, the morality was flipped. They rated accidental harm as worse than the failed intentional harm, because in one situation someone doesn’t get hurt, while in the second situation someone does. Write the study’s authors, “surprisingly, this moral judgement resembles that observed at early development stages.”

Perhaps more chilling, this tendency to focus on the outcomes rather than the underlying intention means that the terrorists are focused more on outcomes than your average person, and that terror behavior is "goal directed." Write the study's authors "... our sample is characterized by a general tendency to focus more on the outcomes of actions than on the actions' underlying intentions." In essence terrorism is the world's worst productivity system, because when coupled with rational choice theory—which says that we tend to act in ways that maximize getting our way with the least amount of personal sacrifice—murdering a lot of people to get your goal, absent moral stigma, starts to make sense.

The article is here.

Thursday, June 15, 2017

How the Science of “Blue Lies” May Explain Trump’s Support

Jeremy Adam Smith
Scientific American
Originally posted on March 24, 2017

Here are two excerpts:

This has led many people to ask themselves: How does the former reality-TV star get away with it? How can he tell so many lies and still win support from many Americans?

Journalists and researchers have suggested many answers, from a hyperbiased, segmented media to simple ignorance on the part of GOP voters. But there is another explanation that no one seems to have entertained. It is that Trump is telling “blue lies”—a psychologist’s term for falsehoods, told on behalf of a group, that can actually strengthen bonds among the members of that group.

(cut)

This research—and these stories—highlights a difficult truth about our species: we are intensely social creatures, but we are prone to divide ourselves into competitive groups, largely for the purpose of allocating resources. People can be prosocial—compassionate, empathetic, generous, honest—in their group and aggressively antisocial toward out-groups. When we divide people into groups, we open the door to competition, dehumanization, violence—and socially sanctioned deceit.

“People condone lying against enemy nations, and since many people now see those on the other side of American politics as enemies, they may feel that lies, when they recognize them, are appropriate means of warfare,” says George Edwards, a political scientist at Texas A&M University and one of the country’s leading scholars of the presidency.

The article is here.

Act Versus Impact: Conservatives and Liberals Exhibit Different Structural Emphases in Moral Judgment

Ivar R. Hannikainen, M. Miller, A. Cushman
Ratio (2017). doi:10.1111/rati.12162

Abstract

Conservatives and liberals disagree sharply on matters of morality and public policy. We propose a novel account of the psychological basis of these differences. Specifically, we find that conservatives tend to emphasize the intrinsic value of actions during moral judgment, in part by mentally simulating themselves performing those actions, while liberals instead emphasize the value of the expected outcomes of the action. We then demonstrate that a structural emphasis on actions is linked to the condemnation of victimless crimes, a distinctive feature of conservative morality. Next, we find that the conservative and liberal structural approaches to moral judgment are associated with their corresponding patterns of reliance on distinct moral foundations. In addition, the structural approach uniquely predicts that conservatives will be more opposed to harm in circumstances like the well-known trolley problem, a result which we replicate. Finally, we show that the structural approaches of conservatives and liberals are partly linked to underlying cognitive styles (intuitive versus deliberative). Collectively, these findings forge a link between two important yet previously independent lines of research in political psychology: cognitive style and moral foundations theory.

The article is here.

Wednesday, June 14, 2017

You’re Not Going to Change Your Mind

Ben Tappin, Leslie Van Der Leer and Ryan McKay
The New York Times
Originally published May 28, 2017

A troubling feature of political disagreement in the United States today is that many issues on which liberals and conservatives hold divergent views are questions not of value but of fact. Is human activity responsible for global warming? Do guns make society safer? Is immigration harmful to the economy?

Though undoubtedly complicated, these questions turn on empirical evidence. As new information emerges, we ought to move, however fitfully, toward consensus.

But we don’t. Unfortunately, people do not always revise their beliefs in light of new information. On the contrary, they often stubbornly maintain their views. Certain disagreements stay entrenched and polarized.

Why? A common explanation is confirmation bias. This is the psychological tendency to favor information that confirms our beliefs and to disfavor information that counters them — a tendency manifested in the echo chambers and “filter bubbles” of the online world.

If this explanation is right, then there is a relatively straightforward solution to political polarization: We need to consciously expose ourselves to evidence that challenges our beliefs to compensate for our inclination to discount it.

But what if confirmation bias isn’t the only culprit?

The article is here.

Should We Outsource Our Moral Beliefs to Others?

Grace Boey
3 Quarks Daily
Originally posted May 29, 2017

Here is an excerpt:

Setting aside the worries above, there is one last matter that many philosophers take to be the most compelling candidate for the oddity of outsourcing our moral beliefs to others. As moral agents, we’re interested in more than just accumulating as many true moral beliefs as possible, such as ‘abortion is permissible’, or ‘killing animals for sport is wrong’. We also value things such as developing moral understanding, cultivating virtuous characters, having appropriate emotional reactions, and the like. Although moral deference might allow us to acquire bare moral knowledge from others, it doesn’t allow us to reflect or cultivate these other moral goods which are central to our moral identity.

Consider the value we place on understanding why we think our moral beliefs are true. Alison Hills notes that pure moral deference can’t get us to such moral understanding. When Bob defers unquestioningly to Sally’s judgment that abortion is morally permissible, he lacks an understanding of why this might be true. Amongst other things, this prevents Bob from being able to articulate, in his own words, the reasons behind this claim. This seems strange enough in itself, and Hills argues for at least two reasons why Bob’s situation is a bad one. For one, Bob’s lack of moral understanding prevents him from acting in a morally worthy way. Bob wouldn’t deserve any moral praise for, say, shutting down someone who harasses women who undergo the procedure.

Moreover, Bob’s lack of moral understanding seems to reflect a lack of good moral character, or virtue. Bob’s belief that ‘late-term abortion is permissible’ isn’t integrated with the rest of his thoughts, motivations, emotions, and decisions. Moral understanding, of course, isn’t all that matters for virtue and character. But philosophers who disagree with Hills on this point, like Robert Howell and Errol Lord, also note that moral deference reflects a lack of virtue and character in other ways, and can prevent the cultivation of these traits.

The article is here.

Tuesday, June 13, 2017

Why It’s So Hard to Admit You’re Wrong

Kristin Wong
The New York Times
Originally published May 22, 2017

Here are two excerpts:

Mistakes can be hard to digest, so sometimes we double down rather than face them. Our confirmation bias kicks in, causing us to seek out evidence to prove what we already believe. The car you cut off has a small dent in its bumper, which obviously means that it is the other driver’s fault.

Psychologists call this cognitive dissonance — the stress we experience when we hold two contradictory thoughts, beliefs, opinions or attitudes.

(cut)

“Cognitive dissonance is what we feel when the self-concept — I’m smart, I’m kind, I’m convinced this belief is true — is threatened by evidence that we did something that wasn’t smart, that we did something that hurt another person, that the belief isn’t true,” said Carol Tavris, a co-author of the book “Mistakes Were Made (But Not by Me).”

She added that cognitive dissonance threatened our sense of self.

“To reduce dissonance, we have to modify the self-concept or accept the evidence,” Ms. Tavris said. “Guess which route people prefer?”

Or maybe you cope by justifying your mistake. The psychologist Leon Festinger suggested the theory of cognitive dissonance in the 1950s when he studied a small religious group that believed a flying saucer would rescue its members from an apocalypse on Dec. 20, 1954. Publishing his findings in the book “When Prophecy Fails,” he wrote that the group doubled down on its belief and said God had simply decided to spare the members, coping with their own cognitive dissonance by clinging to a justification.

“Dissonance is uncomfortable and we are motivated to reduce it,” Ms. Tavris said.

When we apologize for being wrong, we have to accept this dissonance, and that is unpleasant. On the other hand, research has shown that it can feel good to stick to our guns.

The article is here.

Psychiatry’s “Goldwater Rule” has never met a test like Donald Trump

Brian Resnick
Vox.com
Originally published May 25, 2017

Here is an excerpt:

Some psychiatrists are saying it’s time to rethink this core ethical guideline. The rule, they say, is acting like a gag order, preventing qualified psychiatrists from giving the public important perspective on the mental health of a president whose behavior is out of step with any other president in history.

“The public has a right to medical and psychiatric knowledge about its leaders — at least in a democracy,” Nassir Ghaemi, a Tufts University psychiatrist, recently argued at an APA conference. “Why can’t we have a reasoned scientific discussion on this matter? Why do we just have complete censorship?”

The controversy is sure to rage on, as many psychiatrists stand by the professional precedent. The rule itself has even been expanded recently. But just the existence of the debate is an incredible moment not only in the field of psychiatry but in American politics. It’s not just armchair psychiatrists who are concerned about Trump’s mental health — some of the real ones are even willing to rethink their professional ethics because of it.

The article is here.

Monday, June 12, 2017

New bill requires annual ethics training for lawmakers

Pete Kasperowicz
The Washington Examiner
Originally posted May 26, 2017

Members of the House would have to undergo mandated annual ethics training under a new bill offered by Reps. David Cicilline, D-R.I., and Dave Trott, R-Mich.

The two lawmakers said senators are already taking "ongoing" ethics classes, and House staffers are required to undergo training each year. But House lawmakers themselves are exempt.

"Elected officials should always be held to the highest standards of conduct," Cicilline said Thursday. "That's why it's absurd that members of the U.S. House do not have to complete annual ethics training. We need to close this loophole now."

Trott said his constituents believe lawmakers are above the law, and said his bill would help address that complaint.

"No one is above the law, and members of Congress must live by the laws they create," he said.

The article is here.

Views of US Moral Values Slip to Seven-Year Lows

Gallup
Originally posted May 22, 2017

Americans' ratings of U.S. moral values, consistently negative through the years, have slipped to their lowest point in seven years. More than four in five (81%) now rate the state of moral values in the U.S. as only fair or poor.

Since Gallup first asked in 2002 whether the nation's moral values were getting better or getting worse, the percentage saying worse has always been well above the majority level, ranging from a low of 64% in November 2004 to a high of 82% in May 2007. Over the past six years, it has stayed within a five-point range, reaching a low of 72% in 2013 and 2015 before climbing to this year's high of 77%.

Gallup's question about the current state of moral values getting better or worse has been asked over the same 16-year span as the question about the overall state of moral values. The combined percentage saying moral values are only fair or poor through the years has generally aligned with views about moral values getting worse.

The article is here.

Sunday, June 11, 2017

Beyond Googling: The Ethics of Using Patients' Electronic Footprints in Psychiatric Practice

Carl Fisher and Paul Appelbaum
Harvard Review of Psychiatry

Abstract

Electronic communications are an increasingly important part of people's lives, and much information is accessible through such means. Anecdotal clinical reports indicate that mental health professionals are beginning to use information from their patients' electronic activities in treatment and that their data-gathering practices have gone far beyond simply searching for patients online. Both academic and private sector researchers are developing mental health applications to collect patient information for clinical purposes. Professional societies and commentators have provided minimal guidance, however, about best practices for obtaining or using information from electronic communications or other online activities. This article reviews the clinical and ethical issues regarding use of patients' electronic activities, primarily focusing on situations in which patients share information with clinicians voluntarily. We discuss the potential uses of mental health patients' electronic footprints for therapeutic purposes, and consider both the potential benefits and the drawbacks and risks. Whether clinicians decide to use such information in treating any particular patient-and if so, the nature and scope of its use-requires case-by-case analysis. But it is reasonable to assume that clinicians, depending on their circumstances and goals, will encounter circumstances in which patients' electronic activities will be relevant to, and useful in, treatment.

The article is here.

Saturday, June 10, 2017

Feds probing psychiatric hospitals for locking in patients to boost profits

Beth Mole
Ars Technica
Originally published May 24, 2017

At least three US federal agencies are now investigating Universal Health Services over allegations that its psychiatric hospitals keep patients longer than needed in order to milk insurance companies, BuzzFeed News reports.

According to several sources, the UHS chain of psychiatric facilities—the largest in the country—will delay patients' discharge dates until the day insurance coverage runs out, regardless of the need of the patient. Because the hospitals are reimbursed per day, the practice extracts the maximum amount of money from insurance companies. It also can be devastating to patients, who are needlessly kept from returning to their jobs and families. To cover up the scheme, medical notes are sometimes altered and doctors come up with excuses, such as medication changes, sources allege. Employees say they repeatedly hear the phrase: “don’t leave days on the table.”

The Department of Health and Human Services has been investigating UHS for several years, as BuzzFeed has previously reported. UHS, a $12 billion company, gets a third of its revenue from government insurance programs. In 2013, HHS issued subpoenas to 10 UHS psychiatric hospitals.

But now it seems the Department of Defense and the FBI have also gotten involved.

The article is here.

How Gullible Are We? A Review of the Evidence From Psychology and Social Science.

Hugo Mercier
Review of General Psychology, May 18, 2017

Abstract

A long tradition of scholarship, from ancient Greece to Marxism or some contemporary social psychology, portrays humans as strongly gullible—wont to accept harmful messages by being unduly deferent. However, if humans are reasonably well adapted, they should not be strongly gullible: they should be vigilant toward communicated information. Evidence from experimental psychology reveals that humans are equipped with well-functioning mechanisms of epistemic vigilance. They check the plausibility of messages against their background beliefs, calibrate their trust as a function of the source’s competence and benevolence, and critically evaluate arguments offered to them. Even if humans are equipped with well-functioning mechanisms of epistemic vigilance, an adaptive lag might render them gullible in the face of new challenges, from clever marketing to omnipresent propaganda. I review evidence from different cultural domains often taken as proof of strong gullibility: religion, demagoguery, propaganda, political campaigns, advertising, erroneous medical beliefs, and rumors. Converging evidence reveals that communication is much less influential than often believed—that religious proselytizing, propaganda, advertising, and so forth are generally not very effective at changing people’s minds. Beliefs that lead to costly behavior are even less likely to be accepted. Finally, it is also argued that most cases of acceptance of misguided communicated information do not stem from undue deference, but from a fit between the communicated information and the audience’s preexisting beliefs.

The article is here.

Friday, June 9, 2017

Sapolsky on the biology of human evil

Sean Illing
Vox.com
Originally posted May 23, 2017

Here is an excerpt:

The key question of the book — why are we the way we are? — is explored from a multitude of angles, and the narrative structure helps guide the reader. For instance, Sapolsky begins by examining a person’s behavior in the moment (why we recoil or rejoice or respond aggressively to immediate stimuli) and then zooms backward in time, following the chain of antecedent causes back to our evolutionary roots.

For every action, Sapolsky shows, there are several layers of causal significance: There’s a neurobiological cause and a hormonal cause and a chemical cause and a genetic cause, and, of course, there are always environmental and historical factors. He synthesizes the research across these disciplines into a coherent, readable whole.

In this interview, I talk with Sapolsky about the paradoxes of human nature, why we’re capable of both good and evil, whether free will exists, and why symbols have become so central to human life.

The article and interview are here.

Are practitioners becoming more ethical?

By Rebecca Clay
The Monitor on Psychology
May 2017, Vol 48, No. 5
Print version: page 50

The results of research presented at APA's 2016 Annual Convention suggest that today's practitioners are less likely to commit such ethical violations as kissing a client, altering diagnoses to meet insurance criteria and treating homosexuality as pathological than their counterparts 30 years ago.

The research, conducted by psychologists Rebecca Schwartz-Mette, PhD, of the University of Maine at Orono and David S. Shen-Miller, PhD, of Bastyr University, replicated a 1987 study by Kenneth Pope, PhD, and colleagues published in the American Psychologist. Schwartz-Mette and Shen-Miller asked 453 practicing psychologists the same 83 questions posed to practitioners three decades ago.

The items included clear ethical violations, such as having sex with a client or supervisee. But they also included behaviors that could reasonably be construed as ethical, such as breaking confidentiality to report child abuse; behaviors that are ambiguous or not specifically prohibited, such as lending money to a client; and even some that don't seem controversial, such as shaking hands with a client. "Interestingly, 75 percent of the items from the Pope study were rated as less ethical in our study, suggesting a more general trend toward conservatism in multiple areas," says Schwartz-Mette.

The article is here.

Thursday, June 8, 2017

Shining Light on Conflicts of Interest

Craig Klugman
The American Journal of Bioethics 
Volume 17, 2017 - Issue 6

Chimonas, DeVito and Rothman (2017) offer a descriptive target article that examines physicians' knowledge of and reaction to the Sunshine Act's Open Payments Database. This program is a federal computer repository of all payments and goods worth more than $10 made by pharmaceutical companies and device manufacturers to physicians. Created under the 2010 Affordable Care Act, the goal of this database is to make the relationships between physicians and the medical drug/device industry more transparent. Such transparency is often touted as a solution to financial conflicts of interest (COI). A COI occurs when a person owes fealty to more than one party. For example, physicians have fiduciary duties toward patients. At the same time, when physicians receive gifts or benefits from a pharmaceutical company, they are more likely to prescribe that company's products (Spurling et al. 2010). The gift creates a sense of a moral obligation toward the company. These two interests can be (but may not be) in conflict. Such arrangements can undermine a patient's trust in his/her physician and, more broadly, the public's trust in medicine.

(cut)

The idea is that if people are told about the conflict, then they can judge for themselves whether the provider is compromised and whether they wish to receive care from this person. The database exists with this intent—that transparency alone is enough. What is a patient to do with this information? Should patients avoid physicians who have conflicts? The decision is left in the patient's hands. Back in 2014, the Pharmaceutical Research and Manufacturers of America lobbying group expressed concern that the public would not understand the context of any payments or gifts to physicians (Castellani 2014).
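As an aside, even simple scripting makes the database's contents legible in the way this debate presupposes. Below is a minimal, hypothetical sketch in Python; the file name and column names are invented for illustration and do not match the real Open Payments release, whose schema is far more detailed.

```python
# Hypothetical sketch: totaling industry payments per physician from a
# local CSV export. The file name and columns ("physician_name",
# "amount_usd") are invented for illustration; the real Open Payments
# files use a different, more elaborate schema.
import pandas as pd

payments = pd.read_csv("open_payments_export.csv")

# Total payments received by each physician, largest first
totals = (
    payments.groupby("physician_name")["amount_usd"]
    .sum()
    .sort_values(ascending=False)
)

print(totals.head(10))  # the ten largest payment totals
```

Whether a patient could, or should have to, interpret such totals in context is precisely the concern the lobbying group raised.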

The article is here.

The AI Cargo Cult: The Myth of Superhuman AI

Kevin Kelly
Backchannel.com
Originally posted April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below humans. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The article is here.

Wednesday, June 7, 2017

What do White House Rules Mean if They Can Be Circumvented?

Sheelah Kolhatkar
The New Yorker
Originally posted June 6, 2017

Here is an excerpt:

Each Administration establishes its own ethics rules, often by executive order, which go beyond ethics laws codified by Congress (those laws require such things as financial-disclosure forms from government employees, the divestiture of assets if they pose conflicts, and recusal from government matters if they intersect with personal business). While the rules established by law are hard and fast, officials can be granted waivers from the looser executive-order rules. The Obama Administration granted a handful of such waivers over the course of its eight years. What’s startling with the Trump White House is just how many waivers have been issued so early in Trump’s term—more than a dozen were disclosed last week, with another twenty-four expected this week, according to a report in the Wall Street Journal—as well as the Administration’s attempt to keep them secret, all while seeming to flout the laws that dictate how the whole system should work.

The ethics waivers made public last week apply to numerous officials who are now working on matters affecting the same companies and industries they represented before joining the Administration. The documents were only released after the Office of Government Ethics pressed the Trump Administration to make them public, which is how they have been handled in the past; the White House initially refused, attempting to argue that the ethics office lacked the standing to even ask for them. After a struggle, the Administration relented, but many of the waivers it released were missing critical information, such as the dates when they were issued. One waiver in particular, which appears to apply to Trump’s chief strategist, Stephen Bannon, without specifically naming him, grants Administration staff permission to communicate with news organizations where they might have formerly worked (Breitbart News, in Bannon’s case). The Bannon-oriented waiver, issued by the “Counsel to the President,” contains the line “I am issuing this memorandum retroactive to January 20, 2017.”

Walter Shaub, the head of the Office of Government Ethics, quickly responded that there is no such thing as a “retroactive” ethics waiver. Shaub told the Times, “If you need a retroactive waiver, you have violated a rule.”

The article is here.

On the cognitive (neuro)science of moral cognition: utilitarianism, deontology and the ‘fragmentation of value’

Alejandro Rosas
Working Paper: May 2017

Abstract

Scientific explanations of human higher capacities, traditionally denied to other animals, attract the attention of both philosophers and other workers in the humanities. They are often viewed with suspicion and skepticism. In this paper I critically examine the dual-process theory of moral judgment proposed by Greene and collaborators and the normative consequences drawn from that theory. I believe normative consequences are warranted, in principle, but I propose an alternative dual-process model of moral cognition that leads to a different normative consequence, which I dub ‘the fragmentation of value’. In the alternative model, the neat overlap between the deontological/utilitarian divide and the intuitive/reflective divide is abandoned. Instead, we have both utilitarian and deontological intuitions, equally fundamental and partially in tension. Cognitive control is sometimes engaged during a conflict between intuitions. When it is engaged, the result of control is not always utilitarian; sometimes it is deontological. I describe in some detail how this version is consistent with evidence reported by many studies, and what could be done to find more evidence to support it.

The working paper is here.

Tuesday, June 6, 2017

Some Social Scientists Are Tired of Asking for Permission

Kate Murphy
The New York Times
Originally published May 22, 2017

Who gets to decide whether the experimental protocol — what subjects are asked to do and disclose — is appropriate and ethical? That question has been roiling the academic community since the Department of Health and Human Services' Office for Human Research Protections revised its rules in January.

The revision exempts from oversight studies involving “benign behavioral interventions.” This was welcome news to economists, psychologists and sociologists who have long complained that their studies need not receive as much scrutiny as, say, a medical researcher's.

The change received little notice until a March opinion article in The Chronicle of Higher Education went viral. The authors of the article, a professor of human development and a professor of psychology, interpreted the revision as a license to conduct research without submitting it for approval by an institutional review board.

That is, social science researchers ought to be able to decide on their own whether or not their studies are harmful to human subjects.

The Federal Policy for the Protection of Human Subjects (known as the Common Rule) was published in 1991 after a long history of exploitation of human subjects in federally funded research — notably, the Tuskegee syphilis study and a series of radiation experiments that took place over three decades after World War II.

The remedial policy mandated that all institutions, academic or otherwise, establish a review board to ensure that federally funded researchers conducted ethical studies.

The article is here.

Research and clinical issues in trauma and dissociation: Ethical and logical fallacies, myths, misreports, and misrepresentations

Jenny Ann Rydberg
European Journal of Trauma & Dissociation
Available online 23 April 2017

Introduction

The creation of a new journal on trauma and dissociation is an opportunity to take stock of existing models and theories in order to distinguish mythical, and sometimes dangerous, stories from established facts.

Objective

To describe the professional, scientific, clinical, and ethical strategies and fallacies that must be envisaged when considering reports, claims, and recommendations relevant to trauma and dissociation.

Method

After a general overview, two current debates in the field, the stabilisation controversy and the false/recovered memory controversy, are examined in detail to illustrate such issues.

Results

Misrepresentations, misreports, ethical and logical fallacies are frequent in the general and scientific literature regarding the stabilisation and false/recovered memory controversies.

Conclusion

A call is made for researchers and clinicians to strengthen their knowledge of and ability to identify such cognitive, logical, and ethical manoeuvres both in scientific literature and general media reports.

The article is here.

Monday, June 5, 2017

AI May Hold the Key to Stopping Suicide

Bahar Gholipour
NBC News
Originally posted May 23, 2017

Here is an excerpt:

So far the results are promising. Using AI, Ribeiro and her colleagues were able to predict whether someone would attempt suicide within the next two years at about 80 percent accuracy, and within the next week at 92 percent accuracy. Their findings were recently reported in the journal Clinical Psychological Science.

This high level of accuracy was possible because of machine learning: researchers trained an algorithm by feeding it anonymized health records from 3,200 people who had attempted suicide. The algorithm learns patterns by examining combinations of factors that precede suicide attempts, from medication use to the number of ER visits over many years. Bizarre factors may pop up as related to suicide, such as acetaminophen use a year prior to an attempt, but that doesn't mean taking acetaminophen can be isolated as a risk factor for suicide.

"As humans, we want to understand what to look for," Ribeiro says. "But this is like asking what's the most important brush stroke in a painting."

With funding from the Department of Defense, Ribeiro aims to create a tool that can be used in clinics and emergency rooms to better find and help high-risk individuals.
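To make the approach concrete, here is a minimal, illustrative sketch of this kind of model: a random-forest classifier trained on tabular features standing in for health-record variables. The feature names, the synthetic data, and the model choice are all assumptions made for illustration; they are not the published study's actual pipeline.

```python
# Illustrative sketch only: a binary classifier over synthetic tabular
# "health record" features. All feature names, data, and thresholds are
# invented; nothing here reproduces the published study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3200  # mirrors the number of records mentioned in the article

X = np.column_stack([
    rng.poisson(2, n),        # ER visits in recent years
    rng.integers(0, 2, n),    # acetaminophen prescription in prior year
    rng.integers(0, 2, n),    # prior self-harm diagnosis code
    rng.normal(40, 12, n),    # age
])
# Synthetic outcome, loosely correlated with two of the features
y = (0.4 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(0, 1, n) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Discrimination is better summarized by AUC than by raw accuracy
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

The point Ribeiro makes about brush strokes holds for a model like this one, too: the forest's predictions emerge from thousands of interacting splits, so no single feature can be read off as "the" risk factor.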

The article is here.

Can Psychologists Tell Us Anything About Morality?

John M. Doris, Edouard Machery and Stephen Stich
Philosopher's Magazine
Originally published May 10, 2017

Here is an excerpt:

Some psychologists accept morally dubious employment. Some psychologists cheat. Some psychology experiments don't replicate. Some. But the inference from some to all is at best invalid, and at worst, invective. There's good psychology and bad psychology, just like there's good and bad everything else, and tarring the entire discipline with the broadest of brushes won’t help us sort that out. It is no more illuminating to disregard the work of psychologists en masse on the grounds that a tiny minority of the American Psychological Association, a very large and diverse professional association, were involved with the Bush administration’s program of torture than it would be to disregard the writings of all Nietzsche scholars because some Nazis were Nietzsche enthusiasts! To be sure, there are serious questions about which intellectual disciplines, and which intellectuals, are accorded cultural capital, and why. But we are unlikely to find serious answers by means of innuendo and polemic.

Could there be more substantive reasons to exclude scientific psychology from the study of ethics? The most serious – if ultimately unsuccessful – objection proceeds in the language of “normativity”. For philosophers, normative statements are prescriptive, or “oughty”: in contrast to descriptive statements, which aspire only to say how the world is, normative statements say what ought be done about it. And, some have argued, never the twain shall meet.

While philosophers haven’t enjoyed enviable success in adducing lawlike generalisations, one such achievement is Hume’s Law (we told you the issues are old ones), which prohibits deriving normative statements from descriptive statements. As the slogan goes, “is doesn’t imply ought.”

Many philosophers, ourselves included, suppose that Hume is on to something. There probably exists some sort of “inferential barrier” between the is and the ought, such that there are no strict logical entailments from the descriptive to the normative.

The article is here.

Sunday, June 4, 2017

Physicians, Firearms, and Free Speech

Wendy E. Parmet, Jason A. Smith, and Matthew Miller
N Engl J Med 2017; 376:1901-1903
May 18, 2017

Here is an excerpt:

The majority’s well-reasoned decision, in fact, does just that. By relying on heightened rather than strict scrutiny, the majority affirmed that laws regulating physician speech must be designed to enhance rather than harm patient safety. The majority took this mandate seriously and required the state to show some meaningful evidence that the regulation was apt to serve the state’s interest in protecting patients.

The state could not do so for two reasons. First, the decision to keep a gun in the home substantially increases the risk of death for all household members, especially the risk of death by suicide, and particularly so when guns are stored loaded and unlocked, as they are in millions of homes where children live. Second, the majority of U.S. adults who live in homes with guns are unaware of the heightened risk posed by bringing guns into a home. Indeed, by providing accurate information about the risks created by easy access to firearms, as well as ways to modify that risk (e.g., by storing guns unloaded and locked up, separate from ammunition), a physician’s counseling can not only enhance a patient’s capacity for self-determination, but also save lives.

Given the right to provide such counsel, professional norms recognize the responsibility to do so. Fulfilling this obligation, however, may not be easy, since the chief impediments to doing so — and to doing so effectively — are not and never have been legal barriers. Indeed, the court’s welcome ruling does not ensure that most clinicians will honor this hard-won victory by exercising their First Amendment rights.

The article is here.

Saturday, June 3, 2017

Trump Exempts Entire Senior Staff From White House Ethics Rules

Lachlan Markay
The Daily Beast
Originally published May 31, 2017

Here is an excerpt:

Andrew Olmem, another White House economist and a former lobbyist for a host of large financial services and insurance firms, will be free to work with former clients on specific issues affecting bank capital requirements, financial regulation of insurers, and the Puerto Rican debt crisis, all issues on which he has recently lobbied.

Those officials have been given freer rein to advance their former clients’ financial interests, but ethics rules have also been waived for every other “commissioned officer”—staffers who report directly to the president—in the White House who has worked for a political group in the past two years.

That will allow a number of White House staffers to collaborate with pro-Trump advocacy operations. The West Wing is stacked with officials who have made significant sums working for, consulting for, or representing high-profile political organizations, including networks of groups financed by the Trump-backing Mercer family and the libertarian Koch family.

Conway herself consulted for more than 50 political, policy, and advocacy organizations last year, according to a White House financial disclosure statement.

The article is here.

Friday, June 2, 2017

The meaning of life in a world without work

Yuval Noah Harari
The Guardian
Originally posted May 8, 2017

Most jobs that exist today might disappear within decades. As artificial intelligence outperforms humans in more and more tasks, it will replace humans in more and more jobs. Many new professions are likely to appear: virtual-world designers, for example. But such professions will probably require more creativity and flexibility, and it is unclear whether 40-year-old unemployed taxi drivers or insurance agents will be able to reinvent themselves as virtual-world designers (try to imagine a virtual world created by an insurance agent!). And even if the ex-insurance agent somehow makes the transition into a virtual-world designer, the pace of progress is such that within another decade he might have to reinvent himself yet again.

The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms. Consequently, by 2050 a new class of people might emerge – the useless class. People who are not just unemployed, but unemployable.

The article is here.

The Theory of Dyadic Morality: Reinventing Moral Judgment by Redefining Harm

Chelsea Schein, Kurt Gray
Personality and Social Psychology Review 
First Published May 14, 2017

Abstract

The nature of harm—and therefore moral judgment—may be misunderstood. Rather than an objective matter of reason, we argue that harm should be redefined as an intuitively perceived continuum. This redefinition provides a new understanding of moral content and mechanism—the constructionist Theory of Dyadic Morality (TDM). TDM suggests that acts are condemned proportional to three elements: norm violations, negative affect, and—importantly—perceived harm. This harm is dyadic, involving an intentional agent causing damage to a vulnerable patient (A→P). TDM predicts causal links both from harm to immorality (dyadic comparison) and from immorality to harm (dyadic completion). Together, these two processes make the “dyadic loop,” explaining moral acquisition and polarization. TDM argues against intuitive harmless wrongs and modular “foundations,” but embraces moral pluralism through varieties of values and the flexibility of perceived harm. Dyadic morality impacts understandings of moral character, moral emotion, and political/cultural differences, and provides research guidelines for moral psychology.

The article is here.

Thursday, June 1, 2017

Nudges in a post-truth world

Neil Levy
Journal of Medical Ethics 
Published Online First: 19 May 2017

Abstract

Nudges—policy proposals informed by work in behavioural economics and psychology that are designed to lead to better decision-making or better behaviour—are controversial. Critics allege that they bypass our deliberative capacities, thereby undermining autonomy and responsible agency. In this paper, I identify a kind of nudge I call a nudge to reason, which make us more responsive to genuine evidence. I argue that at least some nudges to reason do not bypass our deliberative capacities. Instead, use of these nudges should be seen as appeals to mechanisms partially constitutive of these capacities, and therefore as benign (so far as autonomy and responsible agency are concerned). I sketch some concrete proposals for nudges to reason which are especially important given the apparent widespread resistance to evidence seen in recent political events.

The article is here.

There is no liberal right to sex with students

Maya J. Goldenberg, Karen Houle, Monique Deveaux, Karyn L. Freedman, & Patricia Sheridan
The Times Higher Education
Originally posted May 4, 2017

There is a long and distinguished history of conceptualising liberal democracy in terms of basic rights to which, all other things being equal, everyone is entitled. Sexual freedom is rightly counted among these. But should this right apply where one person is in a position of power and authority over the other? Doctors are sanctioned if they have sex with their patients, as are lawyers who sleep with their clients. Should sexual relationships between professors and students in the same department also be off limits?

Neil McArthur thinks not. As Times Higher Education has reported, the associate professor of philosophy at the University of Manitoba, in Canada, recently published a paper criticising the spread of bans on such relationships. But we believe that his argument is flawed.

The article is here.