Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Evidence. Show all posts

Saturday, February 10, 2024

How to think like a Bayesian

Michael Titelbaum
psyche.co
Originally posted January 10, 2024

You’re often asked what you believe. Do you believe in God? Do you believe in global warming? Do you believe in life after love? And you’re often told that your beliefs are central to who you are, and what you should do: ‘Do what you believe is right.’

These belief-questions demand all-or-nothing answers. But much of life is more complicated than that. You might not believe in God, but also might not be willing to rule out the existence of a deity. That’s what agnosticism is for.

For many important questions, even three options aren’t enough. Right now, I’m trying to figure out what kinds of colleges my family will be able to afford for my children. My kids’ options will depend on lots of variables: what kinds of schools will they be able to get into? What kinds of schools might be a good fit for them? If we invest our money in various ways, what kinds of return will it earn over the next two, five, or 10 years?

Suppose someone tried to help me solve this problem by saying: ‘Look, it’s really simple. Just tell me, do you believe your oldest daughter will get into the local state school, or do you believe that she won’t?’ I wouldn’t know what to say to that question. I don’t believe that she will get into the school, but I also don’t believe that she won’t. I’m perhaps slightly more confident than 50-50 that she will, but nowhere near certain.

One of the most important conceptual developments of the past few decades is the realisation that belief comes in degrees. We don’t just believe something or not: much of our thinking, and decision-making, is driven by varying levels of confidence. These confidence levels can be measured as probabilities, on a scale from zero to 100 per cent. When I invest the money I’ve saved for my children’s education, it’s an oversimplification to focus on questions like: ‘Do I believe that stocks will outperform bonds over the next decade, or not?’ I can’t possibly know that. But I can try to assign educated probability estimates to each of those possible outcomes, and balance my portfolio in light of those estimates.

(cut)

Key points – How to think like a Bayesian
  1. Embrace the margins. It’s rarely rational to be certain of anything. Don’t confuse the improbable with the impossible. When thinking about extremely rare events, try thinking in odds instead of percentages.
  2. Evidence supports what makes it probable. Evidence supports the hypotheses that make the evidence likely. Increase your confidence in whichever hypothesis makes the evidence you’re seeing most probable.
  3. Attend to all your evidence. Consider all the evidence you possess that might be relevant to a hypothesis. Be sure to take into account how you learned what you learned.
  4. Don’t forget your prior opinions. Your confidence after learning some evidence should depend both on what that evidence supports and on how you saw things before it came in. If a hypothesis is improbable enough, strong evidence in its favour can still leave it unlikely.
  5. Subgroups don’t always reflect the whole. Even if a trend obtains in every subpopulation, it might not hold true for the entire population. Consider how traits are distributed across subgroups as well.
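Points 2 and 4 together are just Bayes' rule: posterior confidence is the prior reweighted by how strongly the evidence favors each hypothesis. A minimal sketch (my own illustration, with hypothetical numbers, not from the article) shows how even strong evidence can leave an improbable hypothesis unlikely:

```python
# Bayes' rule, probability form. In odds form (point 1):
# posterior odds = prior odds * likelihood ratio.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after seeing the evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical numbers: a 1-in-1000 prior (point 4), and evidence that is
# 50 times more likely if the hypothesis is true (point 2).
prior = 0.001
posterior = update(prior, 0.50, 0.01)
print(round(posterior, 3))  # 0.048 -- strong evidence, still an unlikely hypothesis
```

In odds terms, the prior odds of 1:999 times a likelihood ratio of 50 give posterior odds of about 1:20, which matches the roughly 5 per cent posterior above. The same update starting from a 50-50 prior instead lands above 90 per cent, which is point 4: the prior matters.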

Tuesday, June 1, 2021

We Must Rethink the Role of Medical Expert Witnesses


Amitha Kalaichandran
Scientific American
Originally posted May 5, 2021

Here are two excerpts:

The second issue is that the standard used by the courts to assess whether an expert witness’s scientific testimony can be included differs by state. Several states (including Minnesota) use the Frye Rule, established in 1923, which asks whether the expert’s assessment is generally accepted by the scientific community that specializes in this narrow field of expertise. Federally, and in several other states, the Daubert Standard of 1993 is used, which requires the expert to show their scientific reasoning (so the determination of validity is left to the courts), though acceptance within the scientific community is still a factor. Each standard has its drawbacks. For instance, under Frye, the expert’s community could be narrowly drawn by the legal team in a way that helps bolster the expert’s outdated or rare perspective, and the Daubert standard presumes that the judge and jury understand the science well enough to independently assess scientific validity. Some states also strictly apply the standard, whereas others are more flexible. (The Canadian approach is derived from the case R v. Mohan, which requires that the expert be qualified and their testimony relevant, but the test for “reliability” is left to the courts.)

Third, when it comes to assessments of cause of death specifically, understanding the distinction between necessary and sufficient is important. Juries can have a hard time teasing out the difference. In the Chauvin trial, the medical expert witnesses testifying on behalf of the prosecution were aligned in their assessment of what killed Floyd: the sustained pressure of the officer’s knee on Floyd’s neck (note that asphyxia is a common cause of cardiac arrest). However, David Fowler, the medical expert witness for the defense, suggested the asphyxia was secondary, pointing instead to heart disease and drug intoxication as meaningful contributors to his death.

(cut)

Another improvement could involve ensuring that courts institute a more stringent application and selection process, in which medical expert witnesses would be required to demonstrate their clinical and research competence related to the specific issues in a case, and where their abilities are recognized by their professional group. For example, the American College of Cardiology could endorse a cardiologist as a leader in a relevant subspecialty—a similar approach has been suggested as a way to reform medical expert witness testimony by emergency physicians. One drawback, according to Faigman, is that courts would be unlikely to fully abdicate their role in evaluating expertise.

Sunday, October 18, 2020

Beliefs have a social purpose. Does this explain delusions?

Anna Greenburgh
psyche.co
Originally published 

Here is an excerpt:

Of course, just because a delusion has logical roots doesn’t mean it’s helpful for the person once it takes hold. Indeed, this is why delusions are an important clinical issue. Delusions are often conceptualised as sitting at the extreme end of a continuum of belief, but how can they be distinguished from other beliefs? If not irrationality, then what demarcates a delusion?

Delusions are fixed, unchanging in the face of contrary evidence, and not shared by the person’s peers. In light of the social function of beliefs, these preconditions have added significance. The coalitional model underlines that beliefs arising from adaptive cognitive processes should show some sensitivity to social context and enable successful social coordination. Delusions lack this social function and adaptability. Clinical psychologists have documented the fixity of delusional beliefs: they are more resistant to change than other types of belief, and are intensely preoccupying, regardless of the social context or interpersonal consequences. In both ‘The Yellow Wallpaper’ and the novel Don Quixote (1605-15) by Miguel de Cervantes, the protagonists’ beliefs about their surroundings are unchangeable and, if anything, become increasingly intense and disruptive. It is this inflexibility to social context, once they take hold, that sets delusions apart from other beliefs.

Across the field of mental health, research showing the importance of the social environment has spurred a great shift in the way that clinicians interact with patients. For example, research exposing the link between trauma and psychosis has resulted in more compassionate, person-centred approaches. The coalitional model of delusions can now contribute to this movement. It opens up promising new avenues of research, which integrate our fundamental social nature and the social function of belief formation. It can also deepen how people experiencing delusions are understood – instead of contributing to stigma by dismissing delusions as irrational, it considers the social conditions that gave rise to such intensely distressing beliefs.

Monday, December 2, 2019

Neuroscientific evidence in the courtroom: a review.

Aono, D., Yaffe, G., & Kober, H.
Cogn. Research 4, 40 (2019)
doi:10.1186/s41235-019-0179-y

Abstract

The use of neuroscience in the courtroom can be traced back to the early twentieth century. However, the use of neuroscientific evidence in criminal proceedings has increased significantly over the last two decades. This rapid increase has raised questions, among the media as well as the legal and scientific communities, regarding the effects that such evidence could have on legal decision makers. In this article, we first outline the history of neuroscientific evidence in courtrooms and then we provide a review of recent research investigating the effects of neuroscientific evidence on decision-making broadly, and on legal decisions specifically. In the latter case, we review studies that measure the effect of neuroscientific evidence (both imaging and nonimaging) on verdicts, sentencing recommendations, and beliefs of mock jurors and judges presented with a criminal case. Overall, the reviewed studies suggest mitigating effects of neuroscientific evidence on some legal decisions (e.g., the death penalty). Furthermore, factors such as mental disorder diagnoses and perceived dangerousness might moderate the mitigating effect of such evidence. Importantly, neuroscientific evidence that includes images of the brain does not appear to have an especially persuasive effect (compared with other neuroscientific evidence that does not include an image). Future directions for research are discussed, with a specific call for studies that vary defendant characteristics, the nature of the crime, and a juror’s perception of the defendant, in order to better understand the roles of moderating factors and cognitive mediators of persuasion.

Significance

The increased use of neuroscientific evidence in criminal proceedings has led some to wonder what effects such evidence has on legal decision makers (e.g., jurors and judges) who may be unfamiliar with neuroscience. There is some concern that legal decision makers may be unduly influenced by testimony and images related to the defendant’s brain. This paper briefly reviews the history of neuroscientific evidence in the courtroom to provide context for its current use. It then reviews the current research examining the influence of neuroscientific evidence on legal decision makers and potential moderators of such effects. Our synthesis of the findings suggests that neuroscientific evidence has some mitigating effects on legal decisions, although neuroimaging-based evidence does not hold any special persuasive power. With this in mind, we provide recommendations for future research in this area. Our review and conclusions have implications for scientists, legal scholars, judges, and jurors, who could all benefit from understanding the influence of neuroscientific evidence on judgments in criminal cases.

Sunday, September 29, 2019

The brain, the criminal and the courts

(Figure: mentions of neuroscience in US judicial opinions rose from 101 in 2005 to more than 400 in 2015, with growth across capital homicides, noncapital homicides, and other felonies.)

Eryn Brown
knowablemagazine.org
Originally posted August 30, 2019

Here is an excerpt:

It remains to be seen if all this research will yield actionable results. In 2018, Hoffman, who has been a leader in neurolaw research, wrote a paper discussing potential breakthroughs and dividing them into three categories: near term, long term and “never happening.” He predicted that neuroscientists are likely to improve existing tools for chronic pain detection in the near future, and in the next 10 to 50 years he believes they’ll reliably be able to detect memories and lies, and to determine brain maturity.

But brain science will never gain a full understanding of addiction, he suggested, or lead courts to abandon notions of responsibility or free will (a prospect that gives many philosophers and legal scholars pause).

Many realize that no matter how good neuroscientists get at teasing out the links between brain biology and human behavior, applying neuroscientific evidence to the law will always be tricky. One concern is that brain studies ordered after the fact may not shed light on a defendant’s motivations and behavior at the time a crime was committed — which is what matters in court. Another concern is that studies of how an average brain works do not always provide reliable information on how a specific individual’s brain works.

“The most important question is whether the evidence is legally relevant. That is, does it help answer a precise legal question?” says Stephen J. Morse, a scholar of law and psychiatry at the University of Pennsylvania. He is in the camp that believes neuroscience will never revolutionize the law, because “actions speak louder than images,” and that in a legal setting, “if there is a disjunct between what the neuroscience shows and what the behavior shows, you’ve got to believe the behavior.” He worries about the prospect of “neurohype” and attorneys who overstate the scientific evidence.

The info is here.

Tuesday, August 27, 2019

Neuroscience and mental state issues in forensic assessment

David Freedman and Simona Zaami
International Journal of Law and Psychiatry
Available online 2 April 2019

Abstract

Neuroscience has already changed how the law understands an individual's cognitive processes, how those processes shape behavior, and how bio-psychosocial history and neurodevelopmental approaches provide information, which is critical to understanding mental states underlying behavior, including criminal behavior. In this paper, we briefly review the state of forensic assessment of mental conditions in the relative culpability of criminal defendants, focused primarily on the weaknesses of current approaches. We then turn to focus on neuroscience approaches and how they have the potential to improve assessment, but with significant risks and limitations.

From the Conclusion:

This approach is not a cure-all. Understanding and explaining specific behaviors is a difficult undertaking, and explaining the mental condition of the person engaged in those behaviors at the time the behaviors took place is even more difficult. Yet the law requires some degree of reliability and a rigorous, honest presentation of the strengths and weaknesses of the science being relied upon to form opinions. Despite the dramatic advances in understanding the neural bases of cognition and functioning, neuroscience does not yet reliably describe how those processes emerge in a specific environmental context (Poldrack et al., 2018), nor what an individual was thinking, feeling, experiencing, understanding, or intending at a particular moment in time (Freedman & Woods, 2018; Greely & Farahany, 2019).

The info is here.

Thursday, May 23, 2019

Pre-commitment and Updating Beliefs

Charles R. Ebersole
Doctoral Dissertation, University of Virginia

Abstract

Beliefs help individuals make predictions about the world. When those predictions are incorrect, it may be useful to update beliefs. However, motivated cognition and biases (notably, hindsight bias and confirmation bias) can instead lead individuals to reshape interpretations of new evidence to seem more consistent with prior beliefs. Pre-committing to a prediction or evaluation of new evidence before knowing its results may be one way to reduce the impact of these biases and facilitate belief updating. I first examined this possibility by having participants report predictions about their performance on a challenging anagrams task before or after completing the task. Relative to those who reported predictions after the task, participants who pre-committed to predictions reported predictions that were more discrepant from actual performance and updated their beliefs about their verbal ability more (Studies 1a and 1b). The effect on belief updating was strongest among participants who directly tested their predictions (Study 2) and belief updating was related to their evaluations of the validity of the task (Study 3). Furthermore, increased belief updating seemed to not be due to faulty or shifting memory of initial ratings of verbal ability (Study 4), but rather reflected an increase in the discrepancy between predictions and observed outcomes (Study 5). In a final study (Study 6), I examined pre-commitment as an intervention to reduce confirmation bias, finding that pre-committing to evaluations of new scientific studies eliminated the relation between initial beliefs and evaluations of evidence while also increasing belief updating. Together, these studies suggest that pre-commitment can reduce biases and increase belief updating in light of new evidence.

The dissertation is here.

Friday, November 9, 2018

Believing without evidence is always morally wrong

Francisco Mejia Uribe
aeon.co
Originally posted November 5, 2018

Here are two excerpts:

But it is not only our own self-preservation that is at stake here. As social animals, our agency impacts on those around us, and improper believing puts our fellow humans at risk. As Clifford warns: ‘We all suffer severely enough from the maintenance and support of false beliefs and the fatally wrong actions which they lead to …’ In short, sloppy practices of belief-formation are ethically wrong because – as social beings – when we believe something, the stakes are very high.

(cut)

Translating Clifford’s warning to our interconnected times, what he tells us is that careless believing turns us into easy prey for fake-news peddlers, conspiracy theorists and charlatans. And letting ourselves become hosts to these false beliefs is morally wrong because, as we have seen, the error cost for society can be devastating. Epistemic alertness is a much more precious virtue today than it ever was, since the need to sift through conflicting information has exponentially increased, and the risk of becoming a vessel of credulity is just a few taps of a smartphone away.

Clifford’s third and final argument as to why believing without evidence is morally wrong is that, in our capacity as communicators of belief, we have the moral responsibility not to pollute the well of collective knowledge. In Clifford’s time, the way in which our beliefs were woven into the ‘precious deposit’ of common knowledge was primarily through speech and writing. Because of this capacity to communicate, ‘our words, our phrases, our forms and processes and modes of thought’ become ‘common property’. Subverting this ‘heirloom’, as he called it, by adding false beliefs is immoral because everyone’s lives ultimately rely on this vital, shared resource.

The info is here.

Sunday, July 15, 2018

Should the police be allowed to use genetic information in public databases to track down criminals?

Bob Yirka
Phys.org
Originally posted June 8, 2018

Here is an excerpt:

The authors point out that there is no law forbidding what the police did—the genetic profiles came from people who willingly and of their own accord gave up their DNA data. But should there be? If you send a swab to Ancestry.com, for example, should the genetic profile they create be off-limits to anyone but you and them? It is doubtful that many who take such actions fully consider the ways in which their profile might be used. Most such companies routinely sell their data to pharmaceutical companies or others looking to use the data to make a profit, for example. Should they also be compelled to give up such data due to a court order? The authors suggest that if the public wants their DNA information to remain private, they need to contact their representatives and demand legislation that lays out specific rules for data housed in public databases.

The article is here.

Monday, April 23, 2018

Bad science puts innocent people in jail — and keeps them there

Radley Balko and Tucker Carrington
The Washington Post
Originally posted March 21, 2018

Here is an excerpt:

At the trial level, juries hear far too much dubious science, whether it’s an unproven field like bite mark matching or blood spatter analysis, exaggerated claims in a field like hair fiber analysis, or analysts testifying outside their area of expertise. It’s difficult to say how many convictions have involved faulty or suspect forensics, but the FBI estimated in 2015 that its hair fiber analysts had testified in about 3,000 cases — and that’s merely one subspecialty of forensics, and only at the federal level. Extrapolating from the database of DNA exonerations, the Innocence Project estimates that bad forensics contributes to about 45 percent of wrongful convictions.

But flawed evidence presented at trial is only part of the problem.  Even once a field of forensics or a particular expert has been discredited, the courts have made it extremely difficult for those convicted by bad science to get a new trial.

The Supreme Court makes judges responsible for determining what is good science.  They already decide what evidence is allowed at trial, so asking them to do the same for expert testimony may seem intuitive.  But judges are trained to do legal analyses, not scientific ones.  They generally deal with challenges to expert testimony by looking at what other judges have said.  If a previous court has allowed a field of forensic evidence, subsequent courts will, too.

The article is here.

Note: These issues also apply to psychologists in the courtroom.

Sunday, December 17, 2017

The Impenetrable Program Transforming How Courts Treat DNA

Jessica Pishko
wired.com
Originally posted November 29, 2017

Here is an excerpt:

But now legal experts, along with Johnson’s advocates, are joining forces to argue to a California court that TrueAllele—the seemingly magic software that helped law enforcement analyze the evidence that tied Johnson to the crimes—should be forced to reveal the code that sent Johnson to prison. This code, they say, is necessary in order to properly evaluate the technology. In fact, they say, justice from an unknown algorithm is no justice at all.

As technology progresses, the law lags behind. As John Oliver commented last month, law enforcement and lawyers rarely understand the science behind detective work. Over the years, various types of “junk science” have been discredited. Arson burn patterns, bite marks, hair analysis, and even fingerprints have all been found to be more inaccurate than previously thought. A September 2016 report by President Obama’s Council of Advisors on Science and Technology found that many of the common techniques law enforcement has historically relied on lack common standards.

In this climate, DNA evidence has been a modern miracle. DNA remains the gold standard for solving crimes, bolstered by academics, verified scientific studies, and experts around the world. Since the advent of DNA testing, nearly 200 people have been exonerated using newly tested evidence; in some places, courts will only consider exonerations with DNA evidence. Juries, too, have become more trusting of DNA, a response known popularly as the “CSI Effect.” A number of studies suggest that the presence of DNA evidence increases the likelihood of conviction or a plea agreement.

The article is here.

Monday, May 15, 2017

Cassandra’s Regret: The Psychology of Not Wanting to Know

Gigerenzer, Gerd; Garcia-Retamero, Rocio
Psychological Review, Vol 124(2), Mar 2017, 179-196.

Abstract

Ignorance is generally pictured as an unwanted state of mind, and the act of willful ignorance may raise eyebrows. Yet people do not always want to know, demonstrating a lack of curiosity at odds with theories postulating a general need for certainty, ambiguity aversion, or the Bayesian principle of total evidence. We propose a regret theory of deliberate ignorance that covers both negative feelings that may arise from foreknowledge of negative events, such as death and divorce, and positive feelings of surprise and suspense that may arise from foreknowledge of positive events, such as knowing the sex of an unborn child. We conduct the first representative nationwide studies to estimate the prevalence and predictability of deliberate ignorance for a sample of 10 events. Its prevalence is high: Between 85% and 90% of people would not want to know about upcoming negative events, and 40% to 70% prefer to remain ignorant of positive events. Only 1% of participants consistently wanted to know. We also deduce and test several predictions from the regret theory: Individuals who prefer to remain ignorant are more risk averse and more frequently buy life and legal insurance. The theory also implies the time-to-event hypothesis, which states that for the regret-prone, deliberate ignorance is more likely the nearer the event approaches. We cross-validate these findings using 2 representative national quota samples in 2 European countries. In sum, we show that deliberate ignorance exists, is related to risk aversion, and can be explained as avoiding anticipatory regret.



The article is here.

Friday, February 10, 2017

Dysfunction Disorder

Joaquin Sapien
Pro Publica
Originally published on January 17, 2017

Here is an excerpt:

The mental health professionals in both cases had been recruited by Montego Medical Consulting, a for-profit company under contract with New York City's child welfare agency. For more than a decade, Montego was paid hundreds of thousands of dollars a year by the city to produce thousands of evaluations in Family Court cases -- of mothers and fathers, spouses and children. Those evaluations were then shared with judges making decisions of enormous sensitivity and consequence: whether a child could stay at home or if they'd be safer in foster care; whether a parent should be enrolled in a counseling program or put on medication; whether parents should lose custody of their children altogether.

In 2012, a confidential review done at the behest of frustrated lawyers and delivered to the administrative judge of Family Court in New York City concluded that the work of the psychologists lined up by Montego was inadequate in nearly every way. The analysis matched roughly 25 Montego evaluations against 20 criteria from the American Psychological Association and other professional guidelines. None of the Montego reports met all 20 criteria. Some met as few as five. The psychologists used by Montego often didn't actually observe parents interacting with children. They used outdated or inappropriate tools for psychological assessments, including one known as a "projective drawing" exercise.

(cut)

Attorneys and psychologists who have worked in Family Court say judges lean heavily on assessments made by psychologists, often referred to as "forensic evaluators." So do judges themselves.

"In many instances, judges rely on forensic evaluators more than perhaps they should," said Jody Adams, who served as a Family Court judge in New York City for nearly 20 years before leaving the bench in 2012. "They should have more confidence in their own insight and judgment. A forensic evaluator's evidence should be a piece of the judge's decision, but not determinative. These are unbelievably difficult decisions; these are not black and white; they are filled with gray areas and they have lifelong consequences for children and their families. So it's human nature to want to look for help where you can get it."

The article is here.

Tuesday, May 24, 2016

Junk Science on Trial

Jordan Smith
The Intercept
Originally posted May 6, 2016

Here is an excerpt:

Expert Infallibility?

The Supreme Court's opinion makes little sense if you consider it critically. Under the court's reasoning, a conviction could be overturned if, for example, an eyewitness to a crime later realized he was wrong about what he saw. But if an expert who testified that DNA evidence belonged to one person later realized that the DNA belonged to someone else, nothing could be done to remedy that error, even if it was responsible for a conviction.

In the wake of that opinion, and with Richards's case firmly in mind, lawyers from across the state asked for a change in law -- one that would make it clear that a conviction can be overturned when experts recant their prior testimony as a result of scientific or technological advances.

Known as a junk science statute, the Bill Richards Bill changed the state penal code to address problematic forensic practices in individual criminal cases. Faulty forensics have been implicated in nearly half of all DNA exonerations, according to the Innocence Project, and in roughly 23 percent of all wrongful convictions, according to the National Registry of Exonerations. California's bill, which passed with bipartisan support, is only the second such statute in the country (following one in Texas), and its passage propelled the Richards case back to the Supreme Court for further consideration.

The article is here.

Sunday, April 3, 2016

When Self-Report Trumps Science: Confessions, DNA, & Prosecutorial Theories on Perceptions of Guilt

Sara Appleby and Saul Kassin
Psychology, Public Policy, and Law, Mar 10 , 2016

Abstract

For many wrongfully convicted individuals, DNA testing presents a new and invaluable means of exoneration. In several recently documented cases, however, innocent confessors were tried and convicted despite DNA evidence that excluded them. In each of these cases, the prosecutor proposed a speculative theory to explain away the mismatched confession and exculpatory DNA. Three studies were conducted that pitted confessions against DNA test results. Study 1 showed that people in general trust DNA evidence far more than self-report, including a defendant’s confession. Using student and adult community samples, Studies 2 and 3 showed that in cases in which the defendant had confessed to police but was later exculpated by DNA, prosecutorial theories spun to reconcile the contradiction attenuated the power of exculpatory DNA, significantly increasing perceptions of the defendant's culpability, the rate of conviction, and the self-reported influence of the confession. Implications and suggestions for reform are discussed.

The cited article is here.

Access to the article is here.

Sunday, March 27, 2016

Reversing the legacy of junk science in the courtroom

By Kelly Servick
Science Magazine
Originally published March 7, 2016

Here is an excerpt:

Testing examiner accuracy using known samples can give the judge or jury a sense of general error rates in a field, but it can’t describe the level of uncertainty around a specific piece of evidence. Right now, only DNA identification includes that measure of uncertainty. (DNA analyses are based on 13 genetic variants, or alleles, that are statistically independent, and known to vary widely among individuals.) Mixtures of genetic material from multiple people can complicate the analysis, but DNA profiling is “a relatively easy statistical problem to solve,” says Nicholas Petraco, an applied mathematician at City University of New York’s John Jay College of Criminal Justice in New York City. Pattern evidence doesn’t operate under the same rules, he says. “What’s an allele on a tool mark?”; “What’s an allele on a hair or fiber?”
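A toy illustration (my own, not from the article) of why statistical independence does so much work here: when loci are independent, per-locus frequencies simply multiply, yielding a quantified random-match probability. The frequencies below are made up for the sake of the sketch.

```python
# Toy random-match probability: if the alleles observed at each locus are
# statistically independent, the chance that a random, unrelated person
# matches at every locus is the product of the per-locus frequencies.
from math import prod

# Hypothetical population frequencies for the alleles seen at five loci.
locus_frequencies = [0.1, 0.08, 0.12, 0.05, 0.1]

match_probability = prod(locus_frequencies)
print(match_probability)  # about 4.8e-06, i.e. roughly one in 200,000
```

Pattern evidence such as tool marks or fibers has no analogous set of independent, countable units with known population frequencies, which is exactly the gap Petraco's rhetorical questions point to.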

The article is here.

Note: This article addresses evidence, such as fingerprints, that can be subject to error. What does this say about neurological or psychological "evidence" in terms of accuracy, validity, and reliability?

Friday, February 12, 2016

Growing use of neurobiological evidence in criminal trials, new study finds

By Emily Underwood
Science
Originally posted January 21, 2016

Here is an excerpt:

Overall, the new study suggests that neurobiological evidence has improved the U.S. criminal justice system “through better determinations of competence and considerations about the role of punishment,” says Judy Illes, a neuroscientist at the University of British Columbia, Vancouver, in Canada. That is not Farahany’s interpretation, however. With a few notable exceptions, use of neurobiological evidence in courtrooms “continues to be haphazard, ad hoc, and often ill conceived,” she and her colleagues write. Lawyers rarely heed scientists’ cautions “that the neurobiological evidence at issue is weak, particularly for making claims about individuals rather than studying between-group differences,” they add.

The article is here.

Tuesday, November 10, 2015

Federal judge says neuroscience is not ready for the courtroom--yet

By Kevin Davis
ABA Journal
Originally published October 20, 2015

Here is an excerpt:

Rakoff, who long has had an interest in neuroscience and is a founding member of the MacArthur Foundation Research Network on Law and Neuroscience, says that judges are still cautious about allowing neuroscientific evidence in court. Criminal lawyers, for example, have introduced brain scans to show a defendant’s brain dysfunction, most often as mitigation in death penalty hearings. Lawyers also have tried to introduce brain scans to prove the existence of pain and as evidence for lie detection.

“The attitude of judges toward neuroscience is one of ambivalence and skepticism,” Rakoff said. “You ask them about the hippocampus, they say it’s something at the zoo.”

The entire article is here.

Thursday, January 8, 2015

Framed by forensics

Junky, out-of-date science fuels jury errors and tragic miscarriages of justice. How can we throw it out of court?

By Douglas Starr
Aeon Magazine
Originally published

Here is an excerpt:

Rivera’s case represents a tragic miscarriage of justice. Seen another way, it’s also the result of bad science and anti-scientific thinking – from the police’s coercive interview of a vulnerable person, to the jury’s acceptance of a false confession over physical evidence, including DNA.

Unfortunately, Rivera’s case is not unique. Hundreds of innocent people have been convicted by bad science, permitting an equal number of perpetrators to go free. It’s impossible to know how often this happens, but the growing number of DNA-related exonerations points to false convictions as the collateral damage of our legal system. Part of the problem involves faulty forensics: contrary to what we might see in the CSI drama shows on TV, few forensic labs are state-of-the-art, and they don’t always use scientific techniques. According to the US National Academy of Sciences, none of the traditional forensic techniques, such as hair comparison, bite-mark analysis or ballistics analysis, qualifies as rigorous, reproducible science. But it’s not just forensics: bad science is marbled throughout our legal system, from the way police interrogate suspects to the decisions judges make on whether to admit certain evidence in court.

The entire article is here.

Saturday, June 14, 2014

Psychological Science's Replicability Crisis and What It Means for Science in the Courtroom

By Jason Michael Chin
Journal of Psychology, Public Policy, and Law (Forthcoming)

Abstract:

In response to what has been termed the “replicability crisis,” great changes are currently under way in how science is conducted and disseminated. Indeed, major journals are changing the way in which they evaluate science. Therefore, a question arises over how such change impacts law’s treatment of scientific evidence. The present standard for the admissibility of scientific evidence in federal courts asks judges to play the role of gatekeeper, determining if the proffered evidence conforms with several indicia of scientific validity. The alternative legal framework, and one still used by several state courts, requires judges to simply evaluate whether a scientific finding or practice is generally accepted within science.

This Essay suggests that, as much as the replicability crisis has highlighted serious issues in the scientific process, it should have similar implications and actionable consequences for legal practitioners and academics. In particular, generally accepted scientific practices have frequently lagged behind prescriptions for best practices, which in turn has affected the way science is reported and performed. The consequence of this phenomenon is that judicial analysis of scientific evidence will still be impacted by deficient generally accepted practices. The Essay ends with some suggestions to help ensure that legal decisions are influenced by science's best practices.

Download the essay here.