Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Criminal Justice System. Show all posts

Tuesday, January 16, 2024

Criminal Justice Reform Is Health Care Reform

Haber LA, Boudin C, Williams BA.
JAMA.
Published online December 14, 2023.

Here is an excerpt:

Health Care While Incarcerated

Federal law mandates provision of health care for incarcerated persons. In 1976, the US Supreme Court ruled in Estelle v Gamble that “deliberate indifference to serious medical needs of prisoners constitutes the ‘unnecessary and wanton infliction of pain,’” prohibited under the Eighth Amendment. Subsequent cases established that incarcerated individuals must receive access to medical care, enactment of ordered care, and treatment without bias to their incarcerated status.

Such court decisions establish rights and responsibilities, but do not fund or oversee health care delivery. Community health care oversight, such as the Joint Commission, does not apply to prison health care. When access to quality care is inadequate, incarcerated patients must resort to lawsuits to advocate for change—a right curtailed by the Prison Litigation Reform Act of 1996, which limited prisoners’ ability to file suit in federal court.

Despite Eighth Amendment guarantees, simply entering the criminal-legal system carries profound personal health risks: violent living conditions result in traumatic injuries, housing in congregate settings predisposes to the spread of infectious diseases, and exceptions to physical comfort, health privacy, and informed decision-making occur during medical care delivery. These factors compound existing health disparities commonly found in the incarcerated population.

The First Step Act

Signed into law by then-President Trump, the First Step Act of 2018 (FSA) was a bipartisan criminal justice reform bill designed to reduce the federal prison population while also protecting public safety. The legislation aimed to decrease entry into prison, provide rehabilitation during incarceration, improve protections for medically vulnerable individuals, and expedite release.

To achieve these goals, the FSA included prospective and retroactive sentencing reforms, most notably expanded relief from mandatory minimum sentences for drug distribution offenses that disproportionately affect Black individuals in the US. The FSA additionally called for the use of evidence-based tools, such as the Prisoner Assessment Tool Targeting Estimated Risk and Needs, to facilitate release decisions.

The legislation also addressed medical scenarios commonly encountered by professionals providing care to incarcerated persons, including prohibitions on shackling pregnant patients, deescalation training for correctional officers when encountering people with psychiatric illness or cognitive deficits, easing access to compassionate release for those with advanced age or life-limiting illness, and mandatory reporting on the use of medication-assisted treatment for opioid use disorder. With opioid overdose being the leading cause of postrelease mortality, the latter requirement has been particularly important for those transitioning out of correctional settings.

During the recent COVID-19 pandemic, FSA amendments expanding incarcerated individuals’ access to the courts led to a marked increase in successful petitions for early release from prison. Decarcerating those individuals most medically at risk during the public health crisis reduced the spread of viral illness associated with prison overcrowding, protecting both incarcerated individuals and those working in carceral settings.

Monday, April 3, 2023

The Mercy Workers

Melanie Garcia
The Marshall Project
Originally published 2 March 2023

Here are two excerpts:

Like her more famous anti-death penalty peers, such as Bryan Stevenson and Sister Helen Prejean, Baldwin argues that people should be judged on more than their worst actions. But she also speaks in more spiritual terms about the value of unearthing her clients’ lives. “We look through a more merciful lens,” she told me, describing her role as that of a “witness who knows and understands, without condemning.” This work, she believes, can have a healing effect on the client, the people they hurt, and even society as a whole. “The horrible thing to see is the crime,” she said. “We’re saying, ‘Please, please, look past that, there’s a person here, and there’s more to it than you think.’”

The United States has inherited competing impulses: It’s “an eye for an eye,” but also “blessed are the merciful.” Some Americans believe that our criminal justice system — rife with excessively long sentences, appalling prison conditions and racial disparities — fails to make us safer. And yet, tell the story of a violent crime and a punishment that sounds insufficient, and you’re guaranteed to get eyerolls.

In the midst of that impasse, I’ve come to see mitigation specialists like Baldwin as ambassadors from a future where we think more richly about violence. For the last few decades, they have documented the traumas, policy failures, family dynamics and individual choices that shape the lives of people who kill. Leaders in the field say it’s impossible to accurately count mitigation specialists — there is no formal license — but there may be fewer than 1,000. They’ve actively avoided media attention, and yet the stories they uncover occasionally emerge in Hollywood scripts and Supreme Court opinions. Over three decades, mitigation specialists have helped drive down death sentences from more than 300 annually in the mid-1990s to fewer than 30 in recent years.

(cut)

The term “mitigation specialist” is often credited to Scharlette Holdman, a brash Southern human rights activist famous for her personal devotion to her clients. The so-called Unabomber, Ted Kaczynski, tried to deed his cabin to her. (The federal government stopped him.) Her last client was accused 9/11 plotter Khalid Shaikh Mohammad. While working his case, Holdman converted to Islam and made a pilgrimage to Mecca. She died in 2017 and had a Muslim burial.

Holdman began a crusade to stop executions in Florida in the 1970s, during a unique moment of American ambivalence towards the punishment. After two centuries of hangings, firing squads and electrocutions, the Supreme Court struck down the death penalty in 1972. The court found that there was no logic guiding which prisoners were executed and which were spared.

The justices eventually let executions resume, but declared, in the 1976 case of Woodson v. North Carolina, that jurors must be able to look at prisoners as individuals and consider “compassionate or mitigating factors stemming from the diverse frailties of humankind.”

Friday, May 6, 2022

Interventions to reduce suicidal thoughts and behaviours among people in contact with the criminal justice system

A. Carter, A. Butler, et al. (2022)
EClinicalMedicine (The Lancet), Vol. 44, 101266

Summary

Background

People who experience incarceration die by suicide at a higher rate than those who have no prior criminal justice system contact, but little is known about the effectiveness of interventions in other criminal justice settings. We aimed to synthesise evidence regarding the effectiveness of interventions to reduce suicide and suicide-related behaviours among people in contact with the criminal justice system.

Findings

Thirty-eight studies (36 primary research articles, two grey literature reports) met our inclusion criteria, 23 of which were conducted in adult custodial settings in high-income, Western countries. Four studies were randomised controlled trials. Two-thirds of studies (n=26, 68%) were assessed as medium quality, 11 (29%) were assessed as high quality, and one (3%) was assessed as low quality. Most had considerable methodological limitations and very few interventions had been rigorously evaluated; as such, drawing robust conclusions about the efficacy of interventions was difficult.

Research in context

Evidence before this study

One previous review had synthesised the literature regarding the effectiveness of interventions during incarceration, but no studies had investigated the effectiveness of interventions to prevent suicidal thoughts and/or behaviours among people in contact with the multiple other settings in the criminal justice system. We searched Embase, PsycINFO, and MEDLINE on 1 June 2021 using variants and combinations of search terms relating to suicide, self-harm, prevention, and criminal justice system involvement (suicide, self-injury, ideation, intervention, trial, prison, probation, criminal justice).

Added value of this study

Our review identified gaps in the evidence base, including a dearth of robust evidence regarding the effectiveness of interventions across non-custodial criminal justice settings and from low- and middle-income countries. We identified the need for studies examining suicide prevention initiatives for people who were detained in police custody, on bail, or on parole/license, those serving non-custodial sentences, and those after release from incarceration. Furthermore, our findings suggested an absence of interventions which considered specific population groups with diverse needs, such as women, First Nations people, and young people.

Friday, April 1, 2022

Implementing The 988 Hotline: A Critical Window To Decriminalize Mental Health

P. Krass, E. Dalton, M. Candon, S. Doupnik
Health Affairs
Originally posted 25 Feb 2022

Here is an excerpt:

Decriminalization Of Mental Health

The 988 hotline holds incredible promise toward decriminalizing the response to mental health emergencies. Currently, if an individual is experiencing a mental health crisis, they, their caregivers, and bystanders have few options beyond calling 911. As a result, roughly one in 10 individuals with mental health disorders has interacted with law enforcement prior to receiving psychiatric care, and 10 percent of police calls are for mental health emergencies. When police arrive, if they determine an acute safety risk, they transport the individual in crisis for further psychiatric assessment, most commonly at a medical emergency department. This almost always takes place in a police vehicle, many times in handcuffs, a scenario that contradicts central tenets of trauma-informed mental health care. In the worst-case scenario, confrontation with police results in injury or death. Adverse outcomes during response to mental health emergencies are more than 10 times as likely for individuals with mental health conditions as for individuals without, and are disproportionately experienced by people of color. This consequence was tragically highlighted by the death of Walter Wallace, Jr., who was killed by police while experiencing a mental health emergency in October 2020.

Ideally, the new 988 number would activate an entirely different cascade of events. An individual in crisis, their family member, or even a bystander will be able to immediately reach a trained crisis counselor who can provide phone-based triage, support, and local resources. If needed, the counselor can activate a mobile mental health crisis team that will arrive on site to de-escalate; provide brief therapeutic interventions; either refer for close outpatient follow up or transport the individual for further psychiatric evaluation; and even offer food, drink, and hygiene supplies.
 
Rather than forcing families to call 911 for any type of help—regardless of criminal activity—the 988 line will allow individuals to access mental health crisis support without involving law enforcement. This approach can empower families to self-advocate for the right level of mental health care—including avoiding unnecessary medical emergency department visits, which are not typically designed to handle mental health crises and can further traumatize individuals and their families—and to initiate psychiatric assessment and treatment sooner. 911 dispatchers will also be able to re-route calls to 988 when appropriate, allowing law enforcement personnel to spend more time on their primary role of ensuring public safety. Finally, the 988 number will help offer a middle option for individuals who need rapid linkage to care, including rapid psychiatric evaluation and initiation of treatment, but do not yet meet criteria for crisis. This is a crucial service given current difficulties in accessing timely, in-network outpatient mental health care.

Sunday, July 25, 2021

Should we be concerned that the decisions of AIs are inscrutable?

John Zerilli
Psyche.co
Originally published 14 June 21

Here is an excerpt:

However, there’s a danger of carrying reliabilist thinking too far. Compare a simple digital calculator with an instrument designed to assess the risk that someone convicted of a crime will fall back into criminal behaviour (‘recidivism risk’ tools are being used all over the United States right now to help officials determine bail, sentencing and parole outcomes). The calculator’s outputs are so dependable that an explanation of them seems superfluous – even for the first-time homebuyer whose mortgage repayments are determined by it. One might take issue with other aspects of the process – the fairness of the loan terms, the intrusiveness of the credit rating agency – but you wouldn’t ordinarily question the engineering of the calculator itself.

That’s utterly unlike the recidivism risk tool. When it labels a prisoner as ‘high risk’, neither the prisoner nor the parole board can be truly satisfied until they have some grasp of the factors that led to it, and the relative weights of each factor. Why? Because the assessment is such that any answer will necessarily be imprecise. It involves the calculation of probabilities on the basis of limited and potentially poor-quality information whose very selection is value-laden.

But what if systems such as the recidivism tool were in fact more like the calculator? For argument’s sake, imagine a recidivism risk-assessment tool that was basically infallible, a kind of Casio-cum-Oracle-of-Delphi. Would we still expect it to ‘show its working’?

This requires us to think more deeply about what it means for an automated decision system to be ‘reliable’. It’s natural to think that such a system would make the ‘right’ recommendations, most of the time. But what if there were no such thing as a right recommendation? What if all we could hope for were only a right way of arriving at a recommendation – a right way of approaching a given set of circumstances? This is a familiar situation in law, politics and ethics. Here, competing values and ethical frameworks often produce very different conclusions about the proper course of action. There are rarely unambiguously correct outcomes; instead, there are only right ways of justifying them. This makes talk of ‘reliability’ suspect. For many of the most morally consequential and controversial applications of ML, to know that an automated system works properly just is to know and be satisfied with its reasons for deciding.

Monday, June 8, 2020

One Nation Under Guard

Samuel Bowles and Arjun Jayadev
The New York Times
Originally posted 15 Feb 2014
(and still relevant today)

Here is an excerpt:

What is happening in America today is both unprecedented in our history, and virtually unique among Western democratic nations. The share of our labor force devoted to guard labor has risen fivefold since 1890 — a year when, in case you were wondering, the homicide rate was much higher than today.

Is this the curse of affluence? Or of ethnic diversity? We don’t think so. The guard-labor share of employment in the United States is four times what it is in Sweden, where living standards rival America’s. And Britain, with its diverse population, uses substantially less guard labor than the United States.

In America, growing inequality has been accompanied by a boom in gated communities and armies of doormen controlling access to upscale apartment buildings. We did not count the doormen, or those producing the gates, locks and security equipment. One could quibble about the numbers; we have elsewhere adopted a broader definition, including prisoners, work supervisors with disciplinary functions, and others.

But however one totes up guard labor in the United States, there is a lot of it, and it seems to go along with economic inequality. States with high levels of income inequality — New York and Louisiana — employ twice as many security workers (as a fraction of their labor force) as less unequal states like Idaho and New Hampshire.

When we look across advanced industrialized countries, we see the same pattern: the more inequality, the more guard labor. As the graph shows, the United States leads in both.

Wednesday, August 14, 2019

Getting AI ethics wrong could 'annihilate technical progress'

Richard Gray
TechXplore
Originally published July 30, 2019

Here is an excerpt:

Biases

But these algorithms can also learn the biases that already exist in data sets. If a police database shows that mainly young, black men are arrested for a certain crime, it may not be a fair reflection of the actual offender profile and instead reflect historic racism within a force. Using AI taught on this kind of data could exacerbate problems such as racism and other forms of discrimination.

"Transparency of these algorithms is also a problem," said Prof. Stahl. "These algorithms do statistical classification of data in a way that makes it almost impossible to see how exactly that happened." This raises important questions about how legal systems, for example, can remain fair and just if they start to rely upon opaque 'black box' AI algorithms to inform sentencing decisions or judgements about a person's guilt.

The next step for the project will be to look at potential interventions that can be used to address some of these issues. It will look at where guidelines can help ensure AI researchers build fairness into their algorithms, where new laws can govern their use and if a regulator can keep negative aspects of the technology in check.

But one of the problems many governments and regulators face is keeping up with the fast pace of change in new technologies like AI, according to Professor Philip Brey, who studies the philosophy of technology at the University of Twente, in the Netherlands.

"Most people today don't understand the technology because it is very complex, opaque and fast moving," he said. "For that reason it is hard to anticipate and assess the impacts on society, and to have adequate regulatory and legislative responses to that. Policy is usually significantly behind."

Wednesday, April 10, 2019

Gov. Newsom to order halt to California’s death penalty

Bob Egelko and Alexei Koseff
San Francisco Chronicle
Originally posted March 12, 2019

Gov. Gavin Newsom is suspending the death penalty in California, calling it discriminatory and immoral, and is granting reprieves to the 737 condemned inmates on the nation’s largest Death Row.

“I do not believe that a civilized society can claim to be a leader in the world as long as its government continues to sanction the premeditated and discriminatory execution of its people,” Newsom said in a statement accompanying an executive order, to be issued Wednesday, declaring a moratorium on capital punishment in the state. “The death penalty is inconsistent with our bedrock values and strikes at the very heart of what it means to be a Californian.”

He plans to order an immediate shutdown of the death chamber at San Quentin State Prison, where the last execution was carried out in 2006. Newsom is also withdrawing California’s recently revised procedures for executions by lethal injection, ending — at least for now — the struggle by prison officials for more than a decade to devise procedures that would pass muster in federal court by minimizing the risk of a botched and painful execution.

Monday, March 18, 2019

The college admissions scandal is a morality play

Elaine Ayala
San Antonio Express-News
Originally posted March 16, 2019

The college admission cheating scandal that raced through social media and dominated news cycles this week wasn’t exactly shocking: Wealthy parents rigged the system for their underachieving children.

It’s an ancient morality play set at elite universities with an unseemly cast of characters: spoiled teens and shameless parents; corrupt test proctors and paid test takers; as well as college sports officials willing to be bribed and a ring leader who ultimately turned on all of them.

William “Rick” Singer, who went to college in San Antonio, wore a wire to cooperate with FBI investigators.

(cut)

Yet even though they were arrested, the 50 people involved managed to secure the best possible outcome under the circumstances. Unlike many people caught shoplifting or possessing small amounts of marijuana, who lack the lawyers and resources to navigate the legal system, the accused parents and coaches quickly posted bond and were promptly released without spending much time in custody.

Saturday, November 3, 2018

Just deserts

A Conversation Between Dan Dennett and Gregg Caruso
aeon.co
Originally published October 4, 2018

Here is an excerpt:

There are additional concerns as well. As I argue in my Public Health and Safety (2017), the social determinants of criminal behaviour are broadly similar to the social determinants of health. In that work, and elsewhere, I advocate adopting a broad public-health approach for identifying and taking action on these shared social determinants. I focus on how social inequities and systemic injustices affect health outcomes and criminal behaviour, how poverty affects brain development, how offenders often have pre-existing medical conditions (especially mental-health issues), how homelessness and education affects health and safety outcomes, how environmental health is important to both public health and safety, how involvement in the criminal justice system itself can lead to or worsen health and cognitive problems, and how a public-health approach can be successfully applied within the criminal justice system. I argue that, just as it is important to identify and take action on the social determinants of health if we want to improve health outcomes, it is equally important to identify and address the social determinants of criminal behaviour. My fear is that the system of desert you want to preserve leads us to myopically focus on individual responsibility and ultimately prevents us from addressing the systemic causes of criminal behaviour.

Consider, for example, the crazed reaction to [the then US president Barack] Obama’s claim that, ‘if you’ve got a [successful] business, you didn’t build that’ alone. The Republicans were so incensed by this claim that they dedicated the second day of the 2012 Republican National Convention to the theme ‘We Built it!’ Obama’s point, though, was simple, innocuous, and factually correct. To quote him directly: ‘If you’ve been successful, you didn’t get there on your own.’ So, what’s so threatening about this? The answer, I believe, lies in the notion of just deserts. The system of desert keeps alive the belief that if you end up in poverty or prison, this is ‘just’ because you deserve it. Likewise, if you end up succeeding in life, you and you alone are responsible for that success. This way of thinking keeps us locked in the system of blame and shame, and prevents us from addressing the systemic causes of poverty, wealth-inequality, racism, sexism, educational inequity and the like. My suggestion is that we move beyond this, and acknowledge that the lottery of life is not always fair, that luck does not average out in the long run, and that who we are and what we do is ultimately the result of factors beyond our control.

I clipped out the more social-psychological aspect of the conversation. There is a much broader, philosophical component regarding free will earlier in the conversation.

Friday, July 27, 2018

Morality in the Machines

Erick Trickey
Harvard Law Bulletin
Originally posted June 26, 2018

Here is an excerpt:

In February, the Harvard and MIT researchers endorsed a revised approach in the Massachusetts House’s criminal justice bill, which calls for a bail commission to study risk-assessment tools. In late March, the House-Senate conference committee included the more cautious approach in its reconciled criminal justice bill, which passed both houses and was signed into law by Gov. Charlie Baker in April.

Meanwhile, Harvard and MIT scholars are going still deeper into the issue. Bavitz and a team of Berkman Klein researchers are developing a database of governments that use risk scores to help set bail. It will be searchable to see whether court cases have challenged a risk-score tool’s use, whether that tool is based on peer-reviewed scientific literature, and whether its formulas are public.

Many risk-score tools are created by private companies that keep their algorithms secret. That lack of transparency creates due-process concerns, says Bavitz. “Flash forward to a world where a judge says, ‘The computer tells me you’re a risk score of 5 out of 7.’ What does it mean? I don’t know. There’s no opportunity for me to lift up the hood of the algorithm.” Instead, he suggests governments could design their own risk-assessment algorithms and software, using staff or by collaborating with foundations or researchers.
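
Bavitz’s point about “lifting the hood” can be made concrete with a toy sketch. The factors, weights, and 1-to-7 scale below are invented for illustration (the 7-point scale merely echoes the quoted example); no real tool’s formula is reproduced here. The idea is only that a government-published scoring rule can be itemized and audited line by line, unlike a proprietary black box.

```python
# Hypothetical sketch of a transparent, publicly auditable risk-score
# formula of the kind Bavitz suggests governments could design themselves.
# All factors and weights here are invented for illustration.

# Public weight table: anyone can "lift the hood" and inspect it.
WEIGHTS = {
    "prior_convictions": 1.0,       # one point per prior conviction (capped below)
    "failure_to_appear": 2.0,       # prior missed court dates weigh more heavily
    "current_charge_violent": 2.0,  # flat bump for a violent current charge
}
MAX_SCORE = 7  # scores reported on a 1-to-7 scale, per the quoted example


def risk_score(prior_convictions, failures_to_appear, violent_charge):
    """Return an integer score from 1 to MAX_SCORE plus an itemized breakdown."""
    breakdown = {
        "prior_convictions": WEIGHTS["prior_convictions"] * min(prior_convictions, 3),
        "failure_to_appear": WEIGHTS["failure_to_appear"] * min(failures_to_appear, 2),
        "current_charge_violent": WEIGHTS["current_charge_violent"] * int(violent_charge),
    }
    raw = 1 + sum(breakdown.values())  # base of 1, plus weighted factors
    return min(int(raw), MAX_SCORE), breakdown


# The breakdown answers "what does a 5 mean?": 1 base + 2 for priors + 2 for FTA.
score, why = risk_score(prior_convictions=2, failures_to_appear=1, violent_charge=False)
print(score, why)
```

Because the weights and caps are published, a defendant told “the computer says 5 out of 7” can see exactly which factors produced each point, which is precisely the opportunity a secret commercial algorithm forecloses.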

Students in the ethics class agreed that risk-score programs shouldn’t be used in court if their formulas aren’t transparent, according to then HLS 3L Arjun Adusumilli. “When people’s liberty interests are at stake, we really expect a certain amount of input, feedback and appealability,” he says. “Even if the thing is statistically great, and makes good decisions, we want reasons.”

Saturday, May 19, 2018

County Jail or Psychiatric Hospital? Ethical Challenges in Correctional Mental Health Care

Andrea G. Segal, Rosemary Frasso, Dominic A. Sisti
Qualitative Health Research
First published March 21, 2018

Abstract

Approximately 20% of the roughly 2.5 million individuals incarcerated in the United States have a serious mental illness (SMI). As a result of their illnesses, these individuals are often more likely to commit a crime, end up incarcerated, and languish in correctional settings without appropriate treatment. The objective of the present study was to investigate how correctional facility personnel reconcile the ethical challenges that arise when housing and treating individuals with SMI. Four focus groups and one group interview were conducted with employees (n = 24) including nurses, clinicians, correctional officers, administrators, and sergeants at a county jail in Pennsylvania. Results show that jail employees felt there are too many inmates with SMI in jail who would benefit from more comprehensive treatment elsewhere; however, given limited resources, employees felt they were doing the best they can. These findings can inform mental health management and policy in a correctional setting.

Thursday, April 12, 2018

CA’s Tax On Millionaires Yields Big Benefits For People With Mental Illness

Anna Gorman
Kaiser Health News
Originally published March 14, 2018

A statewide tax on the wealthy has significantly boosted mental health programs in California’s largest county, helping to reduce homelessness, incarceration and hospitalization, according to a report released Tuesday.

Revenue from the tax, the result of a statewide initiative passed in 2004, also expanded access to therapy and case management to almost 130,000 people up to age 25 in Los Angeles County, according to the report by the Rand Corp. Many were poor and from minority communities, the researchers said.

“Our results are encouraging about the impact these programs are having,” said Scott Ashwood, one of the authors and an associate policy researcher at Rand. “Overall we are seeing that these services are reaching a vulnerable population that needs them.”

The positive findings came just a few weeks after a critical state audit accused California counties of hoarding the mental health money — and the state of failing to ensure that the money was being spent. The February audit said that the California Department of Health Care Services allowed local mental health departments to accumulate $231 million in unspent funds by the end of the 2015-16 fiscal year — which should have been returned to the state because it was not spent in the allowed time frame.

Proposition 63, now known as the Mental Health Services Act, imposed a 1 percent tax on people who earn more than $1 million annually to pay for expanded mental health care in California. The measure raises about $2 billion each year for such services, such as preventing mental illness from progressing, reducing stigma and improving treatment. Altogether, counties have received $16.53 billion.

Saturday, January 6, 2018

The Myth of Responsibility

Raoul Martinez
RSA.org
Originally posted December 7, 2017

Are we wholly responsible for our actions? We don’t choose our brains, our genetic inheritance, our circumstances, our milieu – so how much control do we really have over our lives? Philosopher Raoul Martinez argues that no one is truly blameworthy.  Our most visionary scientists, psychologists and philosophers have agreed that we have far less free will than we think, and yet most of society’s systems are structured around the opposite principle – that we are all on a level playing field, and we all get what we deserve.

The four-minute video is worth watching.

Saturday, July 29, 2017

Ethics and Governance AI Fund funnels $7.6M to Harvard, MIT and independent research efforts

Devin Coldewey
Tech Crunch
Originally posted July 11, 2017

A $27 million fund aimed at applying artificial intelligence to the public interest has announced the first targets for its beneficence: $7.6 million will be split unequally between MIT’s Media Lab, Harvard’s Berkman Klein Center and seven smaller research efforts around the world.

The Ethics and Governance of Artificial Intelligence Fund was created by Reid Hoffman, Pierre Omidyar and the Knight Foundation back in January; the intention was to ensure that “social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers” have a say in how AI is developed and deployed.

To that end, this first round of fundings supports existing organizations working along those lines, as well as nurturing some newer ones.

The lion’s share of this initial round, $5.9 million, will be split by MIT and Harvard, as the initial announcement indicated. Media Lab is, of course, on the cutting edge of many research efforts in AI and elsewhere; Berkman Klein focuses more on the legal and analysis side of things.

The fund’s focuses are threefold:

  • Media and information quality – looking at how to understand and control the effects of autonomous information systems and “influential algorithms” like Facebook’s news feed.
  • Social and criminal justice – perhaps the area where the bad influence of AI-type systems could be the most insidious; biases in data and interpretation could be baked into investigative and legal systems, giving them the illusion of objectivity. (Obviously the fund seeks to avoid this.)
  • Autonomous cars – although this may seem incongruous with the others, self-driving cars represent an immense social opportunity. Mobility is one of the most influential social-economic factors, and its reinvention offers a chance to improve the condition of nearly everyone on the planet — great potential for both advancement and abuse.

Thursday, April 6, 2017

How to Upgrade Judges with Machine Learning

by Tom Simonite
MIT Technology Review
Originally posted March 6, 2017

Here is an excerpt:

The algorithm assigns defendants a risk score based on data pulled from records for their current case and their rap sheet, for example the offense they are suspected of, when and where they were arrested, and the number and type of prior convictions. (The only demographic data it uses is age, not race.)

Kleinberg suggests that algorithms could be deployed to help judges without major disruption to the way they currently work in the form of a warning system that flags decisions highly likely to be wrong. Analysis of judges’ performance suggested they have a tendency to occasionally release people who are very likely to fail to show in court, or to commit crime while awaiting trial. An algorithm could catch many of those cases, says Kleinberg.
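The warning-system idea described above can be illustrated with a toy sketch. This is not the NBER study's actual model; the feature names, weights, and threshold below are hypothetical, chosen only to show the shape of a logistic risk score that flags release decisions likely to be wrong.

```python
import math

# Hypothetical weights for illustration only -- not the study's fitted model.
WEIGHTS = {
    "prior_convictions": 0.35,        # count of prior convictions
    "prior_failures_to_appear": 0.6,  # count of prior failures to appear
    "age_under_25": 0.4,              # 1 if under 25, else 0 (age is the only demographic input)
}
BIAS = -2.0
FLAG_THRESHOLD = 0.5  # flag releases whose predicted failure probability exceeds this

def risk_score(features):
    """Logistic risk score in [0, 1] computed from case-record features."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def flag_release(features):
    """Warn the judge when a release decision looks highly likely to be wrong."""
    return risk_score(features) > FLAG_THRESHOLD

defendant = {"prior_convictions": 4, "prior_failures_to_appear": 2, "age_under_25": 1}
print(round(risk_score(defendant), 3), flag_release(defendant))  # -> 0.731 True
```

A real system would fit the weights to historical outcomes; the point here is only that such a flag sits alongside the judge's decision rather than replacing it.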

Richard Berk, a professor of criminology at the University of Pennsylvania, describes the study as “very good work,” and an example of a recent acceleration of interest in applying machine learning to improve criminal justice decisions. The idea has been explored for 20 years, but machine learning has become more powerful, and data to train it more available.

Berk recently tested a system with the Pennsylvania State Parole Board that advises on the risk a person will reoffend, and found evidence it reduced crime. The NBER study is important because it looks at how machine learning can be used pre-sentencing, an area that hasn’t been thoroughly explored, he says.

The article is here.

Editor's Note: I often wonder how long it will be until machine learning is applied to psychotherapy.

Monday, February 6, 2017

Misguided mental health system needs an overhaul

Jim Gottstein
Alaska Dispatch News
Originally posted January 12, 2017

The glaring failures surrounding Esteban Santiago, which resulted in the tragic killing of five people and the wounding of eight others in Fort Lauderdale, Florida, prompt me to make some points about our misguided mental health system.

First, psychiatrists have no ability to predict who is going to be violent. In a Jan. 3, 2013, Washington Post article, "Predicting violence is a work in progress," writer David Brown, after reviewing the research, reported:

• "There is no instrument that is specifically useful or validated for identifying potential school shooters or mass murderers."

• "The best-known attempt to measure violence in mental patients found that mental illness by itself didn't predict an above-average risk of being violent."

• "(S)tudies have shown psychiatrists' accuracy in identifying patients who would become violent was slightly better than chance."

• "(T)he presence of a mental disorder (is) only a small contributor to risk, outweighed by other factors such as age, previous violent acts, alcohol use, impulsivity, gang membership and lack of family support."

The article is here.

Wednesday, November 16, 2016

The Interrogation Decision-Making Model: A General Theoretical Framework for Confessions.

Yang, Yueran; Guyll, Max; Madon, Stephanie
Law and Human Behavior, Oct 20, 2016.

This article presents a new model of confessions referred to as the interrogation decision-making model. This model provides a theoretical umbrella with which to understand and analyze suspects’ decisions to deny or confess guilt in the context of a custodial interrogation. The model draws upon expected utility theory to propose a mathematical account of the psychological mechanisms that underlie suspects’ decisions to deny or confess guilt at any specific point during an interrogation, as well as how confession decisions can change over time. Findings from the extant literature pertaining to confessions are considered to demonstrate how the model offers a comprehensive and integrative framework for organizing a range of effects within a limited set of model parameters.
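The expected-utility logic the abstract describes can be sketched in a toy form. This is not Yang, Guyll, and Madon's parameterization; the utilities, probabilities, and cost term below are hypothetical, meant only to illustrate how a confess-or-deny choice can flip as an interrogation wears on.

```python
def expected_utility(p_conviction, u_convicted, u_free):
    """Expected utility of an outcome lottery: convicted with probability
    p_conviction, released otherwise."""
    return p_conviction * u_convicted + (1 - p_conviction) * u_free

def chooses_confession(p_conv_deny, p_conv_confess, u_convicted, u_free,
                       interrogation_cost):
    """Suspect confesses when confessing (ending the interrogation) yields
    higher expected utility than continuing to deny while bearing the
    accumulated cost of the interrogation itself."""
    eu_deny = expected_utility(p_conv_deny, u_convicted, u_free) - interrogation_cost
    eu_confess = expected_utility(p_conv_confess, u_convicted, u_free)
    return eu_confess > eu_deny

# Hypothetical numbers: confessing raises conviction probability (0.3 -> 0.9),
# conviction is costly (-10 vs 0 for release). Early in the interrogation the
# cost of holding out is low and the suspect denies; as the cost mounts, the
# same suspect switches to confessing.
print(chooses_confession(0.3, 0.9, -10, 0, interrogation_cost=2))  # early: False
print(chooses_confession(0.3, 0.9, -10, 0, interrogation_cost=7))  # later: True
```

The time-varying cost term is what lets a single decision rule capture the model's claim that confession decisions can change over the course of an interrogation.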

The article is here.

Monday, November 14, 2016

Walter Sinnott-Armstrong discusses artificial intelligence and morality

By Joyce Er
Duke Chronicle
Originally published October 25, 2016

How do we create artificial intelligence that serves mankind’s purposes? Walter Sinnott-Armstrong, Chauncey Stillman professor of practical ethics, led a discussion Monday on the subject.

Through an open discussion funded by the Future of Life Institute, Sinnott-Armstrong raised issues at the intersection of computer science and ethical philosophy. Among the tricky questions Sinnott-Armstrong tackled were programming artificial intelligence so that it would not eliminate the human race, as well as the legal and moral issues involving self-driving cars.

Sinnott-Armstrong noted that artificial intelligence and morality are not as irreconcilable as some might believe, despite one being regarded as highly structured and the other seen as highly subjective. He highlighted various uses for artificial intelligence in resolving moral conflicts, such as improving criminal justice and locating terrorists.

The article is here.

Sunday, November 6, 2016

The Psychology of Disproportionate Punishment

Daniel Yudkin
Scientific American
Originally published October 18, 2016

Here is an excerpt:

These studies suggest that certain features of the human mind are prone to “intergroup bias” in punishment. While our slow, thoughtful deliberative side may desire to maintain strong standards of fairness and equality, our more basic, reflexive side may be prone to hostility and aggression to anyone deemed an outsider.

Indeed, this is consistent with what we know about the evolutionary heritage of our species, which spent thousands of years in tightly knit tribal groups competing for scarce resources on the African savannah. Intergroup bias may be tightly woven into the fabric of everyone’s DNA, ready to emerge under conditions of hurry or stress.

But the picture of human relationships is not all bleak. Indeed, another line of research in which I am involved, led by Avital Mentovich, sheds light on the ways we might transcend the biases that lurk beneath the surface of the psyche.

The article is here.