"Living a fully ethical life involves doing the most good we can." - Peter Singer
"Common sense is not so common." - Voltaire

Wednesday, November 30, 2016

Human brain is predisposed to negative stereotypes, new study suggests

Hannah Devlin
The Guardian
Originally posted November 1, 2016

The human brain is predisposed to learn negative stereotypes, according to research that offers clues as to how prejudice emerges and spreads through society.

The study found that the brain responds more strongly to information about groups who are portrayed unfavourably, adding weight to the view that the negative depiction of ethnic or religious minorities in the media can fuel racial bias.

Hugo Spiers, a neuroscientist at University College London, who led the research, said: “The newspapers are filled with ghastly things people do ... You’re getting all these news stories and the negative ones stand out. When you look at Islam, for example, there’s so many more negative stories than positive ones and that will build up over time.”

The article is here.

Can Robots Make Moral Decisions? Should They?

Joelle Renstrom

The Daily Beast
Originally published November 12, 2016

Here is an excerpt:

Whether it’s possible to program a robot with safeguards such as Asimov’s laws is debatable. A word such as “harm” is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov’s fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.

Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It’s doubtful that an algorithm can do that—at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies called “H-bots” from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both “die.” The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what’s best for humanity, especially if it can’t calculate survival odds?

The article is here.

Tuesday, November 29, 2016

Everyone Thinks They’re More Moral Than Everyone Else

By Cari Romm
New York Magazine - The Science of Us
Originally posted November 15, 2016

There’s been a lot of talk over the past week about the “filter bubble” — the ideological cocoon that each of us inhabits, blinding us to opposing views. As my colleague Drake wrote the day after the election, the filter bubble is why so many people were so blindsided by Donald Trump’s win: They only saw, and only read, stories assuming that it wouldn’t happen.

Our filter bubbles are defined by the people and ideas we choose to surround ourselves with, but each of us also lives in a one-person bubble of sorts, viewing the world through our own distorted sense of self. The way we view ourselves in relation to others is a constant tug-of-war between two opposing forces: On one end of the spectrum is something called illusory superiority, a psychological quirk in which we tend to assume that we’re better than average — past research has found it to be true in people estimating their own driving skills, parents’ perceived ability to catch their kid in a lie, even cancer patients’ estimates of their own prognoses. And on the other end of the spectrum, there’s “social projection,” or the assumption that other people share your abilities or beliefs.

Why does imprisoned psychologist still have license to practice?

Charles Keeshan and Susan Sarkauskas
Chicago Daily Herald
Originally published November 11, 2016

Here is an excerpt:

Federal prosecutors said Rinaldi submitted phony bills to Medicare for about $1.1 million over four years, collecting at least $447,155. In nearly a dozen instances, they said, she submitted claims indicating she had provided between 35 and 42 hours of therapy in a single day. In others, she submitted claims stating she had provided care to Chicago-area patients when she was actually in San Diego or Las Vegas.

The article is here.

Monday, November 28, 2016

CRISPR gene-editing tested in a person for the first time

David Cyranoski
Originally published November 16, 2016

A Chinese group has become the first to inject a person with cells that contain genes edited using the revolutionary CRISPR–Cas9 technique.

On 28 October, a team led by oncologist Lu You at Sichuan University in Chengdu delivered the modified cells into a patient with aggressive lung cancer as part of a clinical trial at the West China Hospital, also in Chengdu.

Earlier clinical trials using cells edited with a different technique have excited clinicians. The introduction of CRISPR, which is simpler and more efficient than other techniques, will probably accelerate the race to get gene-edited cells into the clinic across the world, says Carl June, who specializes in immunotherapy at the University of Pennsylvania in Philadelphia and led one of the earlier studies.

The article is here.

Studying ethics, 'Star Trek' style, at Drake

Daniel P. Finney
The Des Moines Register
Originally posted November 10, 2016

Here is an excerpt:

Sure, the discussion was about ethics of the fictional universe of “Star Trek.” But fiction, like all art, reflects the human condition.

The issue Capt. Sisko wrestled with had parallels to the real world.

Some historians make the controversial assertion that President Franklin D. Roosevelt knew of the impending attack on Pearl Harbor in 1941 but allowed it to happen to bring the United States into World War II, a move the public opposed before the attack.

In more recent times, former President George W. Bush’s administration used faulty intelligence suggesting Iraq possessed weapons of mass destruction to justify a war that many believed would stabilize the increasingly sectarian Middle East. It did not.

The article is here.

Sunday, November 27, 2016

Approach-Induced Biases in Human Information Sampling

Laurence T. Hunt and others
PLOS Biology
Published: November 10, 2016


Information sampling is often biased towards seeking evidence that confirms one’s prior beliefs. Despite such biases being a pervasive feature of human behavior, their underlying causes remain unclear. Many accounts of these biases appeal to limitations of human hypothesis testing and cognition, de facto evoking notions of bounded rationality, but neglect more basic aspects of behavioral control. Here, we investigated a potential role for Pavlovian approach in biasing which information humans will choose to sample. We collected a large novel dataset from 32,445 human subjects, making over 3 million decisions, who played a gambling task designed to measure the latent causes and extent of information-sampling biases. We identified three novel approach-related biases, formalized by comparing subject behavior to a dynamic programming model of optimal information gathering. These biases reflected the amount of information sampled (“positive evidence approach”), the selection of which information to sample (“sampling the favorite”), and the interaction between information sampling and subsequent choices (“rejecting unsampled options”). The prevalence of all three biases was related to a Pavlovian approach-avoid parameter quantified within an entirely independent economic decision task. Our large dataset also revealed that individual differences in the amount of information gathered are a stable trait across multiple gameplays and can be related to demographic measures, including age and educational attainment. As well as revealing limitations in cognitive processing, our findings suggest information sampling biases reflect the expression of primitive, yet potentially ecologically adaptive, behavioral repertoires. One such behavior is sampling from options that will eventually be chosen, even when other sources of information are more pertinent for guiding future action.

The article is here.

Saturday, November 26, 2016

Harvard scientists think they've pinpointed the physical source of consciousness

Fiona McDonald
Originally posted 8 November 2016

Here is an excerpt:

Now the Harvard team has identified not only the specific brainstem region linked to arousal, but also two cortical regions that all appear to work together to form consciousness.

To figure this out, the team analysed 36 patients in hospital with brainstem lesions - 12 of them were in a coma (unconscious) and 24 were defined as being conscious.

The researchers then mapped their brainstems to figure out if there was one particular region that could explain why some patients had maintained consciousness despite their injuries, while others had become comatose.

What they found was one small area of the brainstem - known as the rostral dorsolateral pontine tegmentum - that was significantly associated with coma. Ten out of the 12 unconscious patients had damage in this area, while just one out of the 24 conscious patients did.

The article is here.

What is data ethics?

Luciano Floridi and Mariarosaria Taddeo
Philosophical Transactions Royal Society A

This theme issue has the founding ambition of landscaping data ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data ethics builds on the foundation provided by computer and information ethics but, at the same time, it refines the approach endorsed so far in this research field, by shifting the level of abstraction of ethical enquiries, from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even data that never translate directly into information but can be used to support actions or generate behaviours, for example. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations—the interactions among hardware, software and data—rather than on the variety of digital technologies that enable them. And it emphasizes the complexity of the ethical challenges posed by data science. Because of such complexity, data ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of data science and its applications within a consistent, holistic and inclusive framework. Only as a macroethics will data ethics provide solutions that can maximize the value of data science for our societies, for all of us and for our environments. This article is part of the themed issue ‘The ethical impact of data science’.

The article is here.

Friday, November 25, 2016

A New Spin on the Quantum Brain

By Jennifer Ouellette
Quanta Magazine
November 2, 2016

The mere mention of “quantum consciousness” makes most physicists cringe, as the phrase seems to evoke the vague, insipid musings of a New Age guru. But if a new hypothesis proves to be correct, quantum effects might indeed play some role in human cognition. Matthew Fisher, a physicist at the University of California, Santa Barbara, raised eyebrows late last year when he published a paper in Annals of Physics proposing that the nuclear spins of phosphorus atoms could serve as rudimentary “qubits” in the brain — which would essentially enable the brain to function like a quantum computer.

As recently as 10 years ago, Fisher’s hypothesis would have been dismissed by many as nonsense. Physicists have been burned by this sort of thing before, most notably in 1989, when Roger Penrose proposed that mysterious protein structures called “microtubules” played a role in human consciousness by exploiting quantum effects. Few researchers believe such a hypothesis plausible. Patricia Churchland, a neurophilosopher at the University of California, San Diego, memorably opined that one might as well invoke “pixie dust in the synapses” to explain human cognition.

The article is here.

Thursday, November 24, 2016

Middle School Suicides Reach An All-Time High

Elissa Nadworny
Originally posted November 4, 2016

There's a perception that children don't kill themselves, but that's just not true. A new report shows that, for the first time, suicide rates for U.S. middle school students have surpassed the rate of death by car crashes.

The suicide rate among youngsters ages 10 to 14 has been steadily rising, and doubled in the U.S. from 2007 to 2014, according to the Centers for Disease Control and Prevention. In 2014, 425 young people 10 to 14 years of age died by suicide.

The article and the video are here.

National Suicide Hotline: 1-800-273-8255

Wednesday, November 23, 2016

Increase in US Suicide Rates and the Critical Decline in Psychiatric Beds

Tarun Bastiampillai, Steven S. Sharfstein, & Stephen Allison
JAMA. Published online November 3, 2016

The closure of most US public mental hospital beds and the reduction in acute general psychiatric beds over recent decades have led to a crisis, as overall inpatient capacity has not kept pace with the needs of patients with psychiatric disorders. Currently, state-funded psychiatric beds are almost entirely forensic (ie, allocated to people within the criminal justice system who have been charged or convicted). Very limited access to nonforensic psychiatric inpatient care is contributing to the risks of violence, incarceration, homelessness, premature mortality, and suicide among patients with psychiatric disorders. In particular, a safe minimum number of psychiatric beds is required to respond to suicide risk given the well-established and unchanging prevalence of mental illness, relapse rates, treatment resistance, nonadherence with treatment, and presentations after acute social crisis. Very limited access to inpatient care is likely a contributing factor for the increasing US suicide rate. In 2014, suicide was the second-leading cause of death for people aged between 10 and 34 years and the tenth-leading cause of death for all age groups, with firearm trauma being the leading method.

Currently, the United States has a relatively low 22 psychiatric beds per 100 000 population compared with the Organisation for Economic Cooperation and Development (OECD) average of 71 beds per 100 000 population. Only 4 of the 35 OECD countries (Italy, Chile, Turkey, and Mexico) have fewer psychiatric beds per 100 000 population than the United States. Although European health systems are very different from the US health system, they provide a useful comparison. For instance, Germany, Switzerland, and France have 127, 91, and 87 psychiatric beds per 100 000 population, respectively.

The article is here.

Moral Distress in Physicians and Nurses: Impact on Professional Quality of Life and Turnover.

C. L. Austin, R. Saylor, and P. J. Finley
Psychological Trauma: Theory, Research, Practice, and Policy, 2016


Objective: The purpose of this study was to investigate moral distress (MD) and turnover intent as related to professional quality of life in physicians and nurses at a tertiary care hospital.

Method: Health care providers from a variety of hospital departments anonymously completed 2 validated questionnaires (Moral Distress Scale–Revised and Professional Quality of Life Scale). Compassion fatigue (as measured by secondary traumatic stress [STS] and burnout [BRN]) and compassion satisfaction are subscales which make up one’s professional quality of life. Relationships between these constructs and clinicians’ years in health care, critical care patient load, and professional discipline were explored.

Results: The findings (n = 329) demonstrated significant correlations between STS, BRN, and MD. Scores associated with intentions to leave or stay in a position were indicative of high versus low MD. We report the highest-scoring situations of MD, as well as when physicians and nurses are most at risk for STS, BRN, and MD. Both physicians and nurses identified the events contributing to the highest level of MD as being compelled to provide care that seems ineffective and working with a critical care patient load >50%.

Conclusion: The results from this study of physicians and nurses suggest that the presence of MD significantly impacts turnover intent and professional quality of life. Therefore implementation of emotional wellness activities (e.g., empowerment, opportunity for open dialog regarding ethical dilemmas, policy making involvement) coupled with ongoing monitoring and routine assessment of these maladaptive characteristics is warranted.

The article is here.

Tuesday, November 22, 2016

The real problem (of consciousness)

Anil K Seth
Originally posted November 2, 2016

Here is an excerpt:

So what underlies being conscious specifically, as opposed to just being awake? We know it’s not just the number of neurons involved. The cerebellum (the so-called ‘little brain’ hanging off the back of the cortex) has about four times as many neurons as the rest of the brain, but seems barely involved in maintaining conscious level. It’s not even the overall level of neural activity – your brain is almost as active during dreamless sleep as it is during conscious wakefulness. Rather, consciousness seems to depend on how different parts of the brain speak to each other, in specific ways.

A series of studies by the neuroscientist Marcello Massimini at the University of Milan provides powerful evidence for this view. In these studies, the brain is stimulated by brief pulses of energy – using a technique called transcranial magnetic stimulation (TMS) – and its electrical ‘echoes’ are recorded using EEG. In dreamless sleep and general anaesthesia, these echoes are very simple, like the waves generated by throwing a stone into still water. But during conscious states, a typical echo ranges widely over the cortical surface, disappearing and reappearing in complex patterns. Excitingly, we can now quantify the complexity of these echoes by working out how compressible they are, similar to how simple algorithms compress digital photos into JPEG files. The ability to do this represents a first step towards a ‘consciousness-meter’ that is both practically useful and theoretically motivated.
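The compressibility idea in this excerpt can be shown in miniature. Below is a toy sketch, not the published measure (real work in this area uses calibrated perturbational-complexity indices computed from multichannel EEG); the signals, threshold, and scoring here are invented purely to illustrate that a regular "echo" compresses far better than an irregular one:

```python
import random
import zlib

def compressibility(signal, threshold=0.0):
    """Toy complexity score: binarize a signal, then see how small
    zlib can squeeze it. A higher ratio means less compressible,
    i.e. a more complex pattern."""
    bits = bytes(1 if x > threshold else 0 for x in signal)
    return len(zlib.compress(bits)) / len(bits)

random.seed(0)
# A simple, wave-like "echo": highly regular, so it compresses well.
simple_echo = [1.0 if (i // 50) % 2 == 0 else -1.0 for i in range(5000)]
# A complex "echo": irregular fluctuations that resist compression.
complex_echo = [random.gauss(0, 1) for _ in range(5000)]

print(compressibility(simple_echo) < compressibility(complex_echo))  # True
```

The analogy to JPEG in the excerpt is the same intuition: the more structure a pattern has, the shorter its compressed description, so the compressed size acts as a rough complexity meter.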

The article is here.

When Disagreement Gets Ugly: Perceptions of Bias and the Escalation of Conflict

Kathleen A. Kennedy and Emily Pronin
Pers Soc Psychol Bull 2008 34: 833


It is almost a truism that disagreement produces conflict. This article suggests that perceptions of bias can drive this relationship. First, these studies show that people perceive those who disagree with them as biased. Second, they show that the conflict-escalating approaches that people take toward those who disagree with them are mediated by people's tendency to perceive those who disagree with them as biased. Third, these studies manipulate the mediator and show that experimental manipulations that prompt people to perceive adversaries as biased lead them to respond more conflictually—and that such responding causes those who engage in it to be viewed as more biased and less worthy of cooperative gestures. In summary, this article provides evidence for a “bias-perception conflict spiral,” whereby people who disagree perceive each other as biased, and those perceptions in turn lead them to take conflict-escalating actions against each other (which in turn engender further perceptions of bias, continuing the spiral).

The article is here.

For those who do marital counseling or work in any adversarial system.

Monday, November 21, 2016

From porkies to whoppers: over time lies may desensitise brain to dishonesty

Hannah Devlin
The Guardian
Originally posted October 24, 2016

Here is an excerpt:

Now scientists have uncovered an explanation for why telling a few porkies has the tendency to spiral out of control. The study suggests that telling small, insignificant lies desensitises the brain to dishonesty, meaning that lying gradually feels more comfortable over time.

Tali Sharot, a neuroscientist at University College London and senior author, said: “Whether it’s evading tax, infidelity, doping in sports, making up data in science or financial fraud, deceivers often recall how small acts of dishonesty snowballed over time and they suddenly found themselves committing quite large crimes.”

Sharot and colleagues suspected that this phenomenon was due to changes in the brain’s response to lying, rather than simply being a case of one lie necessitating another to maintain a story.

The article is here.

A Theory of Hypocrisy

Eric Schwitzgebel
The Splintered Mind blog
Originally posted in October 2016

Here is an excerpt:

Furthermore, if they are especially interested in the issue, violations of those norms might be more salient and visible to them than for the average person. The person who works in the IRS office sees how frequent and easy it is to cheat on one's taxes. The anti-homosexual preacher sees himself in a world full of gays. The environmentalist grumpily notices all the giant SUVs rolling down the road. Due to an increased salience of violations of the norms they most care about, people might tend to overestimate the frequency of the violations of those norms -- and then when they calibrate toward mediocrity, their scale might be skewed toward estimating high rates of violation. This combination of increased salience of unpunished violations plus calibration toward mediocrity might partly explain why hypocritical norm violations are more common than a purely strategic account might suggest.

But I don't think that's enough by itself to explain the phenomenon, since one might still expect people to tend to avoid conspicuous moral advocacy on issues where they know they are average-to-weak; and even if their calibration scale is skewed a bit high, they might hope to pitch their own behavior especially toward the good side on that particular issue -- maybe compensating by allowing themselves more laxity on other issues.

The blog post is here.

Sunday, November 20, 2016

Vignette 35: Initial Telepsychology Session for Free?

Dr. Larry Ellison, a psychologist colleague, contacts you about a marketing plan for his telepsychology services. He has over 1,000 followers on Twitter and a strong social media presence on Facebook. His plan is this: He wants to offer one free psychotherapy session to potential patients. He explains he is trying to promote his telepsychology practice and show that psychologists are open, friendly, and willing to help, potentially for free, in order to get treatment started. The overarching goal is to develop a robust telepsychology practice.

Knowing the rules of his state, Dr. Ellison will make it clear that services are only available in the states in which he is licensed. Dr. Ellison will use a HIPAA-compliant videoconferencing service. He is looking for general feedback, such as thoughts on a good marketing plan or any ethical concerns.

Upon hearing his plan, do you have any ethical concerns about this marketing plan?

What are some of the practical or potential pitfalls of this plan?

What are some of the overarching ethical principles involved in this decision?

What state laws would you consider to help make this decision?

Saturday, November 19, 2016

Risk Management and You: 9 Most Frequent Violations for Psychologists

Ken Pope and Melba Vasquez
Ethics in Psychotherapy and Counseling: Practical Guide (5th edition)

For U.S. and Canadian psychologists, the 9 most frequent causes among the 5,582 disciplinary actions over the years were (in descending order of frequency):

  1. unprofessional conduct, 
  2. sexual misconduct, 
  3. negligence, 
  4. nonsexual dual relationships, 
  5. conviction of a crime, 
  6. failure to maintain adequate or accurate records, 
  7. failure to comply with continuing education or competency requirements, 
  8. inadequate or improper supervision or delegation, and 
  9. substandard or inadequate care. 

Friday, November 18, 2016

The shame of public shaming

Russell Blackford
The Conversation
Originally published May 6, 2016

Here is an excerpt:

Shaming is on the rise. We’ve shifted – much of the time – to a mode of scrutinising each other for purity. Very often, we punish decent people for small transgressions or for no real transgressions at all. Online shaming, conducted via the blogosphere and our burgeoning array of social networking services, creates an environment of surveillance, fear and conformity.

The making of a call-out culture

I noticed the trend – and began to talk about it – around five years ago. I’d become increasingly aware of cases where people with access to large social media platforms used them to “call out” and publicly vilify individuals who’d done little or nothing wrong. Few onlookers were prepared to support the victims. Instead, many piled on with glee (perhaps to signal their own moral purity; perhaps, in part, for the sheer thrill of the hunt).

Since then, the trend to an online call-out culture has continued and even intensified, but something changed during 2015. Mainstream journalists and public intellectuals finally began to express their unease.

The article is here.

Bayesian Brains without Probabilities

Adam N. Sanborn & Nick Chater
Trends in Cognitive Science
Published Online: October 26, 2016

Bayesian explanations have swept through cognitive science over the past two decades, from intuitive physics and causal learning, to perception, motor control and language. Yet people flounder with even the simplest probability questions. What explains this apparent paradox? How can a supposedly Bayesian brain reason so poorly with probabilities? In this paper, we propose a direct and perhaps unexpected answer: that Bayesian brains need not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, the brain is a Bayesian sampler. Only with infinite samples does a Bayesian sampler conform to the laws of probability; with finite samples it systematically generates classic probabilistic reasoning errors, including the unpacking effect, base-rate neglect, and the conjunction fallacy.
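The abstract's central claim — that a sampler which is only Bayesian in the infinite limit will commit classic errors with finite samples — can be illustrated with a toy simulation. This is a sketch, not the authors' model; the event probabilities and sample sizes below are arbitrary choices made for illustration:

```python
import random

random.seed(1)
P_A, P_B = 0.4, 0.5  # two independent events, so true P(A and B) = 0.2

def estimate(p, n):
    """Estimate a probability from n Bernoulli samples."""
    return sum(random.random() < p for _ in range(n)) / n

def conjunction_error_rate(n_samples, trials=10_000):
    """How often a finite sampler judges P(A and B) > P(A) — a
    conjunction-rule violation — when each probability is estimated
    from its own small batch of samples."""
    errors = 0
    for _ in range(trials):
        p_a = estimate(P_A, n_samples)
        p_ab = estimate(P_A * P_B, n_samples)
        if p_ab > p_a:
            errors += 1
    return errors / trials

few = conjunction_error_rate(n_samples=5)
many = conjunction_error_rate(n_samples=500)
print(few, many)  # violations are common with few samples, vanish with many
```

With only a handful of samples per judgment, sampling noise routinely makes the conjunction look more probable than its conjunct; with many samples, the estimates converge and the "fallacy" disappears — which is the pattern the paper's argument turns on.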

The article is here.

Thursday, November 17, 2016

Can Psychedelics Make Us More Moral?

Derek Beres
Big Think
Originally published August 22, 2016

Here is an excerpt:

Could a moral drug enhancement instill empathy in such a person? If so, should it be used? Earp is not ignorant of the ethics of such a drug. Looking at the question from a broader social perspective, rather than an individualist mindset, is one important factor. If there’s a possibility that a psychopath could harm members of a society, would such a drug be beneficial, especially if the person desires it? What if they don’t?

Psychopathy is a small but very real instance. What about extending this idea of moral neuroenhancement to people with depression? Anger management issues? Excessive anxiety? This does not imply that a person needs a daily dose. Research has shown that psilocybin has an effect even after one episode...

The article is here.

Can Machines Become Moral?

Don Howard
Big Questions Online
Originally published October 23, 2016

Here is an excerpt:

There is an important lesson here, which applies with equal force to the claim that robots cannot comprehend emotion. It is that what can or cannot be done in the domain of artificial intelligence is always an empirical question, the answer to which will have to await the results of further research and development. Confident a priori assertions about what science and engineering cannot achieve have a history of turning out to be wrong, as with Auguste Comte’s bold claim in the 1830s that science could never reveal the internal chemical constitution of the sun and other heavenly bodies, a claim he made at just the time when scientists like Fraunhofer, Foucault, Kirchhoff, and Bunsen were pioneering the use of spectrographic analysis for precisely that task.

The article is here.

Wednesday, November 16, 2016

The Interrogation Decision-Making Model: A General Theoretical Framework for Confessions.

Yang, Yueran; Guyll, Max; Madon, Stephanie
Law and Human Behavior, Oct 20, 2016.

This article presents a new model of confessions referred to as the interrogation decision-making model. This model provides a theoretical umbrella with which to understand and analyze suspects’ decisions to deny or confess guilt in the context of a custodial interrogation. The model draws upon expected utility theory to propose a mathematical account of the psychological mechanisms that not only underlie suspects’ decisions to deny or confess guilt at any specific point during an interrogation, but also how confession decisions can change over time. Findings from the extant literature pertaining to confessions are considered to demonstrate how the model offers a comprehensive and integrative framework for organizing a range of effects within a limited set of model parameters.
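The expected-utility framing the abstract describes can be sketched in miniature. This is a hypothetical illustration, not the authors' model: the utilities, the leniency factor, and the probabilities below are all invented for the example.

```python
def expected_utility(p_outcomes, utilities):
    """Expected utility: the probability-weighted sum of outcome utilities."""
    return sum(p * u for p, u in zip(p_outcomes, utilities))

def decide(p_conviction_if_deny, leniency=0.3):
    """Toy choice between denying and confessing, with made-up utilities:
      deny    -> acquittal (utility 0) with prob 1-p, full sentence (-10) with prob p
      confess -> a certain but reduced sentence, -10 * (1 - leniency)
    """
    eu_deny = expected_utility(
        [1 - p_conviction_if_deny, p_conviction_if_deny], [0.0, -10.0])
    eu_confess = -10.0 * (1 - leniency)
    return "confess" if eu_confess > eu_deny else "deny"

# As interrogation raises the suspect's perceived odds of conviction,
# the utility-maximizing choice can flip from denial to confession.
print(decide(0.4))  # deny    (EU deny = -4 > EU confess = -7)
print(decide(0.9))  # confess (EU deny = -9 < EU confess = -7)
```

The point of the sketch is the dynamic the model formalizes: a suspect's decision can change over the course of an interrogation as the perceived probabilities and payoffs shift, even with fixed preferences.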

The article is here.

Supervising AI Growth

by Tucker Davey
The Future of Life
Originally posted October 26, 2016

Here is an excerpt:

As Google and other tech companies continue to improve their intelligent machines with each evaluation, the human trainers will fulfill a smaller role. Eventually, Christiano explains, “it’s effectively just one machine evaluating another machine’s behavior.”

Ideally, “each time you build a more powerful machine, it effectively models human values and does what humans would like,” says Christiano. But he worries that these machines may stray from human values as they surpass human intelligence. To put this in human terms: a complex intelligent machine would resemble a large organization of humans. If the organization does tasks that are too complex for any individual human to understand, it may pursue goals that humans wouldn’t like.

In order to address these control issues, Christiano is working on an “end-to-end description of this machine learning process, fleshing out key technical problems that seem most relevant.” His research will help bolster the understanding of how humans can use AI systems to evaluate the behavior of more advanced AI systems. If his work succeeds, it will be a significant step in building trustworthy artificial intelligence.

The article is here.

Tuesday, November 15, 2016

The Inevitable Evolution of Bad Science

Ed Yong
The Atlantic
Originally published September 21, 2016

Here is an excerpt:

In the model, as in real academia, positive results are easier to publish than negative ones, and labs that publish more get more prestige, funding, and students. They also pass their practices on. With every generation, one of the oldest labs dies off, while one of the most productive reproduces, creating an offspring that mimics the research style of the parent. That’s the equivalent of a student from a successful team starting a lab of their own.

Over time, and across many simulations, the virtual labs inexorably slid towards less effort, poorer methods, and almost entirely unreliable results. And here’s the important thing: Unlike the hypothetical researcher I conjured up earlier, none of these simulated scientists are actively trying to cheat. They used no strategy, and they behaved with integrity. And yet, the community naturally slid towards poorer methods. What the model shows is that a world that rewards scientists for publications above all else—a world not unlike this one—naturally selects for weak science.

“The model may even be optimistic,” says Brian Nosek from the Center for Open Science, because it doesn’t account for our unfortunate tendency to justify and defend the status quo. He notes, for example, that studies in the social and biological sciences are, on average, woefully underpowered—they are too small to find reliable results.

The article is here.

Scientists “Switch Off” Self-Control Using Brain Stimulation

By Catherine Caruso
Scientific American
Originally published on October 19, 2016

Imagine you are faced with the classic thought experiment dilemma: You can take a pile of money now or wait and get an even bigger stash of cash later on. Which option do you choose? Your level of self-control, researchers have found, may have to do with a region of the brain that lets us take the perspective of others—including that of our future self.

A study, published today in Science Advances, found that when scientists used noninvasive brain stimulation to disrupt a brain region called the temporoparietal junction (TPJ), people appeared less able to see things from the point of view of their future selves or of another person, and consequently were less likely to share money with others and more inclined to opt for immediate cash instead of waiting for a larger bounty at a later date.

The TPJ, which is located where the temporal and parietal lobes meet, plays an important role in social functioning, particularly in our ability to understand situations from the perspectives of other people. However, according to Alexander Soutschek, an economist at the University of Zurich and lead author on the study, previous research on self-control and delayed gratification has focused instead on the prefrontal brain regions involved in impulse control.

The article is here.

Monday, November 14, 2016

Walter Sinnott-Armstrong discusses artificial intelligence and morality

By Joyce Er
Duke Chronicle
Originally published October 25, 2016

How do we create artificial intelligence that serves mankind’s purposes? Walter Sinnott-Armstrong, Chauncey Stillman professor of practical ethics, led a discussion Monday on the subject.

Through an open discussion funded by the Future of Life Institute, Sinnott-Armstrong raised issues at the intersection of computer science and ethical philosophy. Among the tricky questions Sinnott-Armstrong tackled were programming artificial intelligence so that it would not eliminate the human race as well as the legal and moral issues involving self-driving cars.

Sinnott-Armstrong noted that artificial intelligence and morality are not as irreconcilable as some might believe, despite one being regarded as highly structured and the other seen as highly subjective. He highlighted various uses for artificial intelligence in resolving moral conflicts, such as improving criminal justice and locating terrorists.

The article is here.

A Bright Robot Future Awaits, Once This Downer Election Is Over

By Andrew Mayeda
Originally published October 24, 2016

Here is an excerpt:

‘Singularity Is Near’

An hour’s drive away, in San Francisco, the influx of tech workers has helped push the median single-family home price to $1.26 million. Private buses carry them to jobs at Apple Inc., Alphabet Inc.’s Google, or Facebook. Meanwhile, one former mayor has proposed using a decommissioned aircraft carrier to house the city’s homeless, who throng the sidewalks along Market Street, home to Uber and Twitter Inc.

How much will the “second machine age” deepen such divisions? Last month, a trio of International Monetary Fund economists came up with some chilling answers. Even if humans retain their creative edge over robots, they found, it will likely take two decades before productivity gains outweigh the downward pressure on wages from automation; meanwhile, “inequality will be worse, possibly dramatically so.”

And if the robots become perfect substitutes, the paper envisages an extreme scenario in which labor becomes wholly redundant as “capital takes over the entire economy.” The IMF economists even invoke futurist Ray Kurzweil’s 2006 bestseller, “The Singularity Is Near.”

Silicon Valley executives say alarm bells have been ringing for decades about job-killing technology, and they’re usually false alarms.

The article is here.

Sunday, November 13, 2016

The VSED Exit: A Way to Speed Up Dying, Without Asking Permission

by Paula Span
The New York Times
Originally published October 21, 2016

Here is an excerpt:

In end-of-life circles, this option is called VSED (usually pronounced VEEsed), for voluntarily stopping eating and drinking. It causes death by dehydration, usually within seven to 14 days. To people with serious illnesses who want to hasten their deaths, a small but determined group, VSED can sound like a reasonable exit strategy.

Unlike aid with dying, now legal in five states, it doesn't require governmental action or physicians' authorization. Patients don't need a terminal diagnosis, and they don't have to prove mental capacity. They do need resolve.

"It's for strong-willed, independent people with very supportive families," said Dr. Timothy Quill, a veteran palliative care physician at the University of Rochester Medical Center.

He was speaking at a conference on VSED, billed as the nation's first, at Seattle University School of Law this month. It drew about 220 participants -- physicians and nurses, lawyers, bioethicists, academics of various stripes, theologians, hospice staff. (Disclosure: I was also a speaker, and received an honorarium and some travel costs.)

What the gathering made clear was that much about VSED remains unclear.

Is it legal?

For a mentally competent patient, able to grasp and communicate decisions, probably so, said Thaddeus Pope, director of the Health Law Institute at Mitchell Hamline School of Law in St. Paul, Minn. His research has found no laws expressly prohibiting competent people from VSED, and the right to refuse medical and health care intervention is well established.

The article is here.

Saturday, November 12, 2016

Why Suicide Keeps Rising for Middle-Aged Men

By Lisa Esposito
US News and World Report
Originally published Oct. 19, 2016

Suicide rates in the U.S. continue to rise, and working-age adults – particularly men – make up the largest increase, according to the Centers for Disease Control and Prevention. Middle-aged men in the 45 to 60 range experienced a 43 percent increase in suicide deaths from 1997 to 2014, and the rise has been even sharper since 2005. Untreated mental illness, the Great Recession, work-related issues and men's reluctance to reach out for help converge to put them at greater risk for taking their own lives. And because men are more likely than women to use a gun, their suicide attempts are more often fatal.

Historically, suicide rates have always been higher for men, says Dr. Alex Crosby, surveillance branch chief in the CDC's Division of Violence Prevention. "But what we've seen in these past few years is rates have been going up among males and females," he told journalists attending a National Press Foundation conference in September. "Still, rates are higher among males – about four times higher." For suicide attempts that don't prove fatal, the balance changes, with two to three times more females than males trying to take their own lives.

"In about half of the suicides in the United States, the mechanism or the method was a firearm," Crosby says. Males are more likely to use firearms, while poison is more common for females. However, he notes, "When you look at suicide in the military, females choose firearms almost as much as men."

The article is here.

Moral Dilemmas and Guilt

Patricia S. Greenspan
Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition
Vol. 43, No. 1 (Jan., 1983), pp. 117-125

In 'Moral dilemmas and ethical consistency', Ruth Marcus argues that moral dilemmas are 'real': there are cases where an agent ought to perform each of two incompatible actions. Thus, a doctor with two patients equally in need of his attention ought to save each, even though he cannot save both. By claiming that his dilemma is real, I take Marcus to be denying (rightly) that it is merely epistemic - a matter of uncertainty as to which patient to save. Rather, she wants to say, the moral code yields two opposing recommendations, both telling him what he ought to do. The code is not inconsistent, however, as long as its rules are all obeyable in some possible world; and it is not deficient as a guide to action, as long as it contains a second order principle, directing an agent to avoid situations of conflict. Where a dilemma does arise, though, the agent is guilty no matter what he does.

This last point seems implausible for the doctor's case; but here I shall consider a case which does fit Marcus's comments on guilt - if not all her views on the nature of moral dilemma. I think that she errs, first of all, in counting as a dilemma any case where there are some considerations favoring each of two incompatible actions, even if it is clear that one of them is right. For instance, in the case of withholding weapons from someone who has gone mad, it would be unreasonable for the agent to feel guilty about breaking his promise, since he has done exactly as he should. But secondly, even in Marcus's 'strong' cases, I do not think that dilemmas must be taken as yielding opposing all-things-considered ought-judgments, viewed as recommendations for action, rather than stopping with judgments of obligation, or reports of commitments. The latter do not imply 'can' (in the sense of physical possibility); and where they are jointly unsatisfiable, and supported by reasons of equal weight, I think we should say that the moral code yields no particular recommendations, rather than two which conflict.

The article is here.

Friday, November 11, 2016

The map is not the territory: medical records and 21st century practice

Stephen A Martin & Christine A Sinsky
The Lancet
Published: 25 April 2016


Documentation of care is at risk of overtaking the delivery of care in terms of time, clinician focus, and perceived importance. The medical record as currently used for documentation contributes to increased cognitive workload, strained clinician–patient relationships, and burnout. We posit that a near verbatim transcript of the clinical encounter is neither feasible nor desirable, and that attempts to produce this exact recording are harmful to patients, clinicians, and the health system. In this Viewpoint, we focus on the alternative constructions of the medical record to bring them back to their primary purpose—to aid cognition, communicate, create a succinct account of care, and support longitudinal comprehensive care—thereby to support the building of relationships and medical decision making while decreasing workload.

Here are two excerpts:

While our vantage point is American, documentation guidelines are part of a global tapestry of what has been termed technogovernance, a bureaucratic model in which professionals' behaviour is shaped and manipulated by tight regulatory policies.


In 1931, the scientist Alfred Korzybski introduced the phrase "the map is not the territory", to suggest that the representation of reality is not reality itself. In health care, creating the map (ie, the clinical record) can take on more importance and consume more resources than providing care itself. Indeed, more time may be spent documenting care than delivering care. In addition, fee-for-service payment arrangements pay for the map (the medical note), not the territory (the actual care). Readers of contemporary electronic notes, composed generously of auto-text output, copy-forward text, and boilerplate statements for compliance, billing, and performance measurement understand all too well the gap between the map and the territory, and more profoundly, between what is done to patients in service of creating the map and what patients actually need.

Contemporary medical records are used for purposes that extend beyond supporting patient and caregiver. Records are used in quality evaluations, practitioner monitoring, practice certifications, billing justification, audit defence, disability determinations, health insurance risk assessments, legal actions, and research.

Psychiatric patients wait the longest in emergency rooms

By Amy Ellis Nutt
The Washington Post
Originally published October 18, 2016

Here is an excerpt:

Many studies over the past decade have shown that ER overcrowding results in higher mortality rates of ER patients, higher costs and higher stress levels for medical professionals.

That overcrowding won’t end anytime soon, Parker said, unless access to outpatient treatment centers expands. But in the latest survey, more than half of the ER physicians said mental health resources in their communities had declined in the past year.

The paradox at the heart of the problem is almost beyond comprehension, in Lippert’s view.

“Nowhere else in medicine,” she said, “do we have our most severely ill patients staying the longest.”

The article is here.

Thursday, November 10, 2016

Has capitalism turned us into narcissists?

Terry Eagleton
The Guardian
Originally published August 3, 2016

Here is an excerpt:

In our own time, the concept of happiness has moved from the private sphere to the public one. As William Davies reports in this fascinating study, a growing number of corporations employ chief happiness officers, while Google has a “jolly good fellow” to keep the company’s spirits up. Maybe the Bank of England should consider hiring a jester. Specialist happiness consultants advise those who have been forcibly displaced from their homes on how to move on emotionally. Two years ago, British Airways trialled a “happiness blanket”, which turns from red to blue as the passenger becomes more relaxed so that your level of contentment is visible to the flight attendants. A new drug, Wellbutrin, promises to alleviate major depressive symptoms occurring after the loss of a loved one. It is supposed to work so effectively that the American Psychiatric Association has ruled that to be unhappy for more than two weeks after the death of another human being can be considered a mental illness. Bereavement is a risk to one’s psychological wellbeing.

It is no wonder that the notion of happiness has been taken into public ownership, given the remarkable spread of spiritual malaise around the globe. Around a third of American adults and close to half in Britain believe that they are sometimes depressed. Even so, more than half a century after the discovery of antidepressants, nobody really knows how they function.

The article is here.

The Ethics of Algorithms: Mapping the Debate

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S. and Floridi, L. 2016 (in press). ‘The Ethics of Algorithms: Mapping the Debate’. Big Data & Society


In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

The article is here.

Wednesday, November 9, 2016

Would Sex with a Robot Be Infidelity?

By Brandon Ambrosino
Originally posted 20 October 2016

Here is an excerpt:

No doubt Westworld will continue exploring questions most of us haven’t yet thought of. But we shouldn’t pretend these questions only belong to the domains of technologists and futurists. As social psychologist Sherry Turkle, who investigates our relationship with technology, has pointed out, our conversations about the future shouldn’t obsess over what robots will be like. Instead, she says, we should think what kind of people we will be, what kind of people we are becoming, every day, whether we’re watching porn, making love to our partners, trying to outsmart Siri or killing an avatar for no other reason than that’s what happens in a video game.

The article is here.

Report: More than half of mentally ill U.S. adults get no treatment

By Amy Ellis Nutt
The Washington Post
Originally published October 19, 2016

Mental Health America just released its annual assessment of Americans with mental illness, the treatment they receive and the resources available to them — and the conclusions are sobering: Twenty percent of adults (43.7 million people) have a mental health condition, and more than half of them do not receive treatment. Among youth, the rates of depression are rising, but 80 percent of children and adolescents get either insufficient treatment or none at all.

“Once again, our report shows that too many Americans are suffering and far too many are not receiving the treatment they need to live healthy and productive lives,” Paul Gionfriddo, president of Mental Health America, said in a statement. “We must improve access to care and treatments, and we need to put a premium on early identification and early intervention for everyone with mental health concerns.”

The article is here.

Tuesday, November 8, 2016

The Illusion of Moral Superiority

Ben M. Tappin and Ryan T. McKay
Social Psychological and Personality Science
2016, 1-9


Most people strongly believe they are just, virtuous, and moral; yet regard the average person as distinctly less so. This invites accusations of irrationality in moral judgment and perception—but direct evidence of irrationality is absent. Here, we quantify this irrationality and compare it against the irrationality in other domains of positive self-evaluation. Participants (N = 270) judged themselves and the average person on traits reflecting the core dimensions of social perception: morality, agency, and sociability. Adapting new methods, we reveal that virtually all individuals irrationally inflated their moral qualities, and the absolute and relative magnitude of this irrationality was greater than that in the other domains of positive self-evaluation. Inconsistent with prevailing theories of overly positive self-belief, irrational moral superiority was not associated with self-esteem. Taken together, these findings suggest that moral superiority is a uniquely strong and prevalent form of "positive illusion," but the underlying function remains unknown.

The article is here.

Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition

Marc A. Edwards and Siddhartha Roy
Environmental Engineering Science. September 2016


Over the last 50 years, we argue that incentives for academic scientists have become increasingly perverse in terms of competition for research funding, development of quantitative metrics to measure performance, and a changing business model for higher education itself. Furthermore, decreased discretionary funding at the federal and state level is creating a hypercompetitive environment between government agencies (e.g., EPA, NIH, CDC), for scientists in these agencies, and for academics seeking funding from all sources—the combination of perverse incentives and decreased funding increases pressures that can lead to unethical behavior. If a critical mass of scientists become untrustworthy, a tipping point is possible in which the scientific enterprise itself becomes inherently corrupt and public trust is lost, risking a new dark age with devastating consequences to humanity. Academia and federal agencies should better support science as a public good, and incentivize altruistic and ethical outcomes, while de-emphasizing output.

The article is here.

Monday, November 7, 2016

Assisted-Suicide Fight Moves to Colorado

Dan French
The Wall Street Journal
Originally posted October 16, 2016

The latest front in the battle over doctor-assisted suicide is unfolding in Colorado, where voters will consider a ballot measure next month that would permit physicians to aid terminally ill patients in dying.

Proposition 106 would allow adults who have six months or less to live, and are mentally competent, to take medication prescribed by a doctor to end their lives.

If it passes, Colorado would be the fifth state to have a law that allows the practice, according to the National Conference of State Legislatures.

Oregon—which is the model for Colorado’s proposal—along with Vermont and Washington have enacted similar measures. California’s law permitting doctor-assisted suicide took effect in June after it passed the state legislature last year.

In a sixth state, Montana, the state supreme court ruled that doctors who provide “aid in dying” are allowed to use a terminally ill patient’s consent as a defense in court if they are charged with homicide.

The article is here.

There’s No Such Thing as Free Will

By Steve Cave
The Atlantic
June 2016 Issue

Here is an excerpt:

This research and its implications are not new. What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. The number of court cases, for example, that use evidence from neuroscience has more than doubled in the past decade—mostly in the context of defendants arguing that their brain made them do it. And many people are absorbing this message in other contexts, too, at least judging by the number of books and articles purporting to explain “your brain on” everything from music to magic. Determinism, to one degree or another, is gaining popular currency. The skeptics are in ascendance.

This development raises uncomfortable—and increasingly nontheoretical—questions: If moral responsibility depends on faith in our own agency, then as belief in determinism spreads, will we become morally irresponsible? And if we increasingly see belief in free will as a delusion, what will happen to all those institutions that are based on it?

The article is here.

Sunday, November 6, 2016

The Psychology of Disproportionate Punishment

Daniel Yudkin
Scientific American
Originally published October 18, 2016

Here is an excerpt:

These studies suggest that certain features of the human mind are prone to “intergroup bias” in punishment. While our slow, thoughtful deliberative side may desire to maintain strong standards of fairness and equality, our more basic, reflexive side may be prone to hostility and aggression to anyone deemed an outsider.

Indeed, this is consistent with what we know about the evolutionary heritage of our species, which spent thousands of years in tightly knit tribal groups competing for scarce resources on the African savannah. Intergroup bias may be tightly woven up in the fabric of everyone’s DNA, ready to emerge under conditions of hurry or stress.

But the picture of human relationships is not all bleak. Indeed, another line of research in which I am involved, led by Avital Mentovich, sheds light on the ways we might transcend the biases that lurk beneath the surface of the psyche.

The article is here.

Saturday, November 5, 2016

Structural Racism and Supporting Black Lives — The Role of Health Professionals

Rachel R. Hardeman, Eduardo M. Medina, and Katy B. Kozhimannil
The New England Journal of Medicine
Originally posted October 12, 2016

Here is an excerpt:

Structural racism, the systems-level factors related to, yet distinct from, interpersonal racism, leads to increased rates of premature death and reduced levels of overall health and well-being. Like other epidemics, structural racism is causing widespread suffering, not only for black people and other communities of color but for our society as a whole. It is a threat to the physical, emotional, and social well-being of every person in a society that allocates privilege on the basis of race.  We believe that as clinicians and researchers, we wield power, privilege, and responsibility for dismantling structural racism — and we have a few recommendations for clinicians and researchers who wish to do so.

First, learn about, understand, and accept the United States’ racist roots. Structural racism is born of a doctrine of white supremacy that was developed to justify mass oppression involving economic and political exploitation.3 In the United States, such oppression was carried out through centuries of slavery premised on the social construct of race.

Our historical notions about race have shaped our scientific research and clinical practice. For example, experimentation on black communities and the segregation of care on the basis of race are deeply embedded in the U.S. health care system.

The article is here.

Friday, November 4, 2016

Why Should We All Be Cultural Psychologists? Lessons From the Study of Social Cognition.

Qi Wang
Perspectives on Psychological Science September 2016 vol. 11 no. 5 583-596


I call the attention of psychologists to the pivotal role of cultural psychology in extending and enriching research programs. I argue that it is not enough to simply acknowledge the importance of culture and urge psychologists to practice cultural psychology in their research. I deconstruct five assumptions about cultural psychology that seriously undermine its contribution to the building of a true psychological science, including that cultural psychology (a) is only about finding group differences, (b) does not appertain to group similarities, (c) concerns only group-level analysis, (d) is irrelevant to basic psychological processes, and (e) is used only to confirm the generalizability of theories. I discuss how cultural psychology can provide unique insights into psychological processes and further equip researchers with additional tools to understand human behavior. Drawing lessons from the 20 years of cultural research that my colleagues and I have done on the development of social cognition, including autobiographical memory, future thinking, self, and emotion knowledge, I demonstrate that incorporating cultural psychology into research programs is not only necessary but also feasible.

Here is an excerpt:

Although those who truly believe that culture does not matter may be rare in the face of mounting theoretical insights and empirical findings to the contrary, there are those who choose not to care about culture because of the fear to venture into the unknown or the desire to maintain status quo. The hope rests on the researchers like my colleague in the second story, who are curious about culture and yet unsure of how to make it matter for their research. They sense the urgency when facing an increasingly diverse world around them and when working with an increasingly diverse participant pool. For those researchers, the important question is how to incorporate culture into research so that they are not continuing to ignore the cultural backgrounds of their participants--taking an attitude of "don't ask, don't tell"--or to control for the variation in analysis as if it imposes "noise."

The article is here.

Fostering Collective Growth and Vitality Following Acts of Moral Courage

Sheldene Simola
Journal of Business Ethics


The purpose of this article is to explore a critical paradox related to the expression of moral courage in organizations, which is that although morally courageous acts are aimed at fostering collective growth, vitality, and virtue, their initial result is typically one of collective unease, preoccupation, or lapse, reflected in the social ostracism and censure of the courageous member and message. Therefore, this article addresses the questions of why many organizational groups suffer stagnation or decline rather than growth and vitality following acts of moral courage, and what can be done to ameliorate this outcome. A general system, relational psychodynamic perspective through which organizational group members might receive and respond to acts of moral courage is offered, and seven insights emerging from this perspective for fostering collective growth and vitality following acts of moral courage are provided.

The article is here.

Thursday, November 3, 2016

Why It's So Hard to Get Mental Healthcare in Rural America

By Syrena Clark
Vice News
October 7, 2016

Here is an excerpt:

Conditions in rural areas can also exacerbate mental-health problems. One in five adults suffers from mental illness, but in rural areas, rates of depression and suicide attempts are significantly higher than in urban areas, according to a report by the Center for Rural Affairs. Mostly because of isolation and poverty. For people who can't afford or access mental healthcare, some turn to self-medication, treating symptoms with drugs, alcohol, and self-harm, worsening their own illnesses. Where I live, it's easier to buy Klonopin from a dealer than it is from a psychiatrist.

After years of inadequate treatment, I swallowed an entire bottle of Gabapentin, a type of seizure medication. My goal was to die. When I was later strapped into an ambulance, the drive to the hospital was over an hour. I got better there, but after six days, I was discharged. It was far too soon, but there simply weren't enough beds to stay.

Mackie said his organization and others are investing in programs that will bring more attention to mental healthcare in rural areas, including programs that "[educate] people in rural areas to be able to provide assistance and care at a basic level," so as to start a pipeline of people who can later become licensed mental-health professionals.

The article is here.

In the World of A.I. Ethics, the Answers Are Murky

Mike Brown
Inverse
Originally posted October 12, 2016

Here is an excerpt:

“We’re not issuing a formal code of ethics. No hard-coded rules are really possible,” Raja Chatila, chair of the initiative’s executive committee, tells Inverse. “The final aim is to ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems.”

It all sounds lovely, but surely a lot of this is ignoring cross-cultural differences. What if, culturally, you hold different values about how your money app should manage your checking account? A 2014 YouGov poll found that 63 percent of British citizens believed that, morally, people have a duty to contribute money to public services through taxation. In the United States, that figure was just 37 percent, with a majority instead responding that there was a stronger moral argument that people have a right to the money they earn. Is it even possible to come up with a single, universal code of ethics that could translate across cultures for advanced A.I.?

The article is here.

Wednesday, November 2, 2016

Hard Time or Hospital Treatment? Mental Illness and the Criminal Justice System

Christine Montross
N Engl J Med 2016; 375:1407-1409
October 13, 2016

Here is an excerpt:

When law enforcement is involved, the trajectory of my patients’ lives veers sharply. The consequences are unpredictable and range from stability and safety to unmitigated disaster. When patients are ill or afraid enough to be potentially assaultive, the earliest decision as to whether they belong in jail or in the hospital may shape the course of the next many years of their lives.

It’s now well understood that the closing of state hospitals in the 1970s and 1980s led to the containment of mentally ill people in correctional facilities. Today our jails and state prisons contain an estimated 356,000 inmates with serious mental illness, while only about 35,000 people with serious mental illness are being treated in state hospitals — stark evidence of the decimation of the public mental health system.

When a mentally ill person comes into contact with the criminal justice system, the decision about whether that person belongs in jail or in the hospital is rarely a clinical one. Instead, it’s made by the gatekeepers of the legal system: police officers, prosecutors, and judges. The poor, members of minority groups, and people with a history of law-enforcement involvement are shuttled into the correctional system in disproportionate numbers; they are more likely to be arrested and less likely than their more privileged counterparts to be adequately treated for their psychiatric illnesses.

The article is here.

A Day in the Life of the Brain by Susan Greenfield: Consciousness

Steven Rose
The Guardian
Originally posted October 12, 2016

Here is an excerpt:

Neuroscientists are rarely trained in philosophy, but a little modesty might not go amiss. Some committed reductionists among them maintain that consciousness is merely a “user illusion” – that you may think you are making conscious decisions but in “reality” all the hard work is being done by the interactions of nerve cells within the brain. Most, however, are haunted by what their philosophical sympathisers call the “hard problem” of the relationship between objective measures – say of light of a particular wavelength – and qualia, the subjective experience of seeing red.

Within their restricted definition there are two potentially productive questions that neuroscientists can ask about consciousness: first, how and when it emerged along the evolutionary path that led to humans? And second, what and where in the brain are the structures and processes that enable conscious experience? The evolutionary question has been discussed extensively by the neurologist Antonio Damasio, who has mapped the transitions between reflex responses to external stimuli in primitive animals through awareness to fully developed self-consciousness, on to the emergence of increasingly complex, large brains.

Greenfield is concerned with the second question, the identification of the neural correlates of consciousness.

The article is here.

Tuesday, November 1, 2016

How U.S. Torture Left a Legacy of Damaged Minds

by Matt Apuzzo, Sheri Fink, and James Risen
The New York Times
Originally published October 10, 2016

Before the United States permitted a terrifying way of interrogating prisoners, government lawyers and intelligence officials assured themselves of one crucial outcome. They knew that the methods inflicted on terrorism suspects would be painful, shocking and far beyond what the country had ever accepted. But none of it, they concluded, would cause long-lasting psychological harm.

Fifteen years later, it is clear they were wrong.

Today in Slovakia, Hussein al-Marfadi describes permanent headaches and disturbed sleep, plagued by memories of dogs inside a blackened jail. In Kazakhstan, Lutfi bin Ali is haunted by nightmares of suffocating at the bottom of a well. In Libya, the radio from a passing car spurs rage in Majid Mokhtar Sasy al-Maghrebi, reminding him of the C.I.A. prison where earsplitting music was just one assault to his senses.

And then there is the despair of men who say they are no longer themselves. “I am living this kind of depression,” said Younous Chekkouri, a Moroccan, who fears going outside because he sees faces in crowds as Guantanamo Bay guards. “I’m not normal anymore.”

The article is here.

The problem with p-values

David Colquhoun
Originally published October 11, 2016

Here is an excerpt:

What matters to a scientific observer is how often you’ll be wrong if you claim that an effect is real, rather than being merely random. That’s a question of induction, so it’s hard. In the early 20th century, it became the custom to avoid induction, by changing the question into one that used only deductive reasoning. In the 1920s, the statistician Ronald Fisher did this by advocating tests of statistical significance. These are wholly deductive and so sidestep the philosophical problems of induction.

Tests of statistical significance proceed by calculating the probability of making our observations (or the more extreme ones) if there were no real effect. This isn’t an assertion that there is no real effect, but rather a calculation of what would be expected if there were no real effect. The postulate that there is no real effect is called the null hypothesis, and the probability is called the p-value. Clearly the smaller the p-value, the less plausible the null hypothesis, so the more likely it is that there is, in fact, a real effect. All you have to do is to decide how small the p-value must be before you declare that you’ve made a discovery. But that turns out to be very difficult.
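The calculation Colquhoun describes — the probability of making our observations, or more extreme ones, if there were no real effect — can be illustrated with a simple permutation test, one direct way of computing such a probability. The two groups of measurements below are invented purely for illustration:

```python
import random

random.seed(0)

# Hypothetical measurements for two groups (invented data, for illustration).
control = [5.1, 4.8, 5.0, 5.2, 4.9, 5.0]
treated = [5.6, 5.9, 5.4, 5.8, 5.7, 5.5]

observed = sum(treated) / len(treated) - sum(control) / len(control)

# Under the null hypothesis there is no real effect, so the group labels
# are arbitrary. Shuffle the labels many times and count how often a
# difference at least as extreme as the observed one arises by chance.
pooled = control + treated
n = len(control)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[n:]) / n - sum(pooled[:n]) / n
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f}, p-value: {p_value:.4f}")
```

A small p-value here means only that the data would be surprising if there were no real effect — which, as the excerpt goes on to argue, is not the same as the probability that the effect is real.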