Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Risk Assessment.

Wednesday, July 5, 2023

Taxonomy of Risks posed by Language Models

Weidinger, L., Uesato, J., et al. (2022, March). In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 19-30). Association for Computing Machinery.

Abstract

Responsible innovation on large-scale Language Models (LMs) requires foresight into and in-depth understanding of the risks these models may pose. This paper develops a comprehensive taxonomy of ethical and social risks associated with LMs. We identify twenty-one risks, drawing on expertise and literature from computer science, linguistics, and the social sciences. We situate these risks in our taxonomy of six risk areas: I. Discrimination, Hate speech and Exclusion, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, and VI. Environmental and Socioeconomic harms. For risks that have already been observed in LMs, the causal mechanism leading to harm, evidence of the risk, and approaches to risk mitigation are discussed. We further describe and analyse risks that have not yet been observed but are anticipated based on assessments of other language technologies, and situate these in the same taxonomy. We underscore that it is the responsibility of organizations to engage with the mitigations we discuss throughout the paper. We close by highlighting challenges and directions for further research on risk evaluation and mitigation with the goal of ensuring that language models are developed responsibly.

Conclusion

In this paper, we propose a comprehensive taxonomy to structure the landscape of potential ethical and social risks associated with large-scale language models (LMs). We aim to support the research programme toward responsible innovation on LMs, broaden the public discourse on ethical and social risks related to LMs, and break risks from LMs into smaller, actionable pieces to facilitate their mitigation. More expertise and perspectives will be required to continue to build out this taxonomy of potential risks from LMs. Future research may also expand this taxonomy by applying additional methods such as case studies or interviews. Next steps building on this work will be to engage further perspectives, to innovate on analysis and evaluation methods, and to build out mitigation tools, working toward the responsible innovation of LMs.


Here is a summary of each of the six risk areas:
  • Discrimination, hate speech, and exclusion: LLMs can reproduce social biases and stereotypes, generate toxic or hateful language, and perform worse for some groups and dialects, leading to unfair treatment and exclusion in areas such as employment, housing, and lending.
  • Information hazards: LLMs can leak or correctly infer private, sensitive, or dangerous information, such as personal data memorized from training material.
  • Misinformation harms: LLMs can produce false or misleading information that users act on, eroding trust and causing material harm.
  • Malicious uses: LLMs can be deliberately misused for activities such as disinformation campaigns, fraud, scams, and personalized phishing.
  • Human-computer interaction harms: conversational agents can exploit anthropomorphism, leading users to overtrust them, disclose too much, or form unsafe dependencies that harm mental health.
  • Environmental and socioeconomic harms: training and running LLMs consumes large amounts of energy, and automation of work may displace workers and concentrate economic benefits.

Wednesday, September 16, 2020

There are no good choices

Ezra Klein
vox.com
Originally published 14 Sept 20

Here is an excerpt:

In America, our ideological conflicts are often understood as the tension between individual freedoms and collective actions. The failure of our pandemic response policy exposes the falseness of that frame. In the absence of effective state action, we, as individuals, find ourselves in prisons of risk, our every movement stalked by disease. We are anything but free; our only liberty is to choose among a menu of awful options. And faced with terrible choices, we are turning on each other, polarizing against one another. YouTube conspiracies and social media shaming are becoming our salves, the way we wrest a modicum of individual control over a crisis that has overwhelmed us as a collective.

“The burden of decision-making and risk in this pandemic has been fully transitioned from the top down to the individual,” says Dr. Julia Marcus, a Harvard epidemiologist. “It started with [responsibility] being transitioned to the states, which then transitioned it to the local school districts — if we’re talking about schools for the moment — and then down to the individual. You can see it in the way that people talk about personal responsibility, and the way that we see so much shaming about individual-level behavior.”

But in shifting so much responsibility to individuals, our government has revealed the limits of individualism.

The risk calculation that rules, and ruins, lives

Think of coronavirus risk like an equation. Here’s a rough version of it: the danger of an act = (the transmission risk of the activity) x (the local prevalence of Covid-19) / (your area’s ability to control a new outbreak).

Individuals can control only a small portion of that equation. People can choose safer activities over riskier ones — though the language of choice too often obscures the reality that many have no economic choice save to work jobs that put them, and their families, in danger. But the local prevalence of Covid-19 and the capacity of authorities to track and squelch outbreaks are collective functions.
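The rough equation above can be put in code; a minimal sketch, in which the numeric scores are illustrative assumptions rather than epidemiological estimates:

```python
def activity_danger(transmission_risk: float,
                    local_prevalence: float,
                    outbreak_control: float) -> float:
    """Rough sketch of the article's equation: danger equals the
    activity's transmission risk times local Covid-19 prevalence,
    divided by the area's capacity to control a new outbreak."""
    if outbreak_control <= 0:
        raise ValueError("outbreak control capacity must be positive")
    return transmission_risk * local_prevalence / outbreak_control

# Illustrative comparison in the same area (identical prevalence and
# control capacity), so only the activity's own risk differs:
indoor_bar = activity_danger(8.0, 0.05, 2.0)
backyard_visit = activity_danger(2.0, 0.05, 2.0)
assert indoor_bar > backyard_visit  # crowded indoor time is riskier
```

As the excerpt notes, individuals mostly control only the first factor; the other two are collective functions.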

The info is here.

Thursday, August 13, 2020

Every Decision Is A Risk. Every Risk Is A Decision.

Maggie Koerth
fivethirtyeight.com
Originally posted 21 July 20

Here is an excerpt:

In general, research has shown that indoors is riskier than outside, long visits riskier than short ones, crowds riskier than individuals — and, look, just avoid situations where you’re being sneezed, yelled, coughed or sung at.

But the trouble with the muddy middle is that a general idea of what is riskier isn’t the same thing as a clear delineation between right and wrong. These charts — even the best ones — aren’t absolute arbiters of safety: They’re the result of surveying experts. In the case of Popescu’s chart, the risk categorizations were assigned based on discussions among herself, Emanuel and Dr. James P. Phillips, the chief of disaster medicine at George Washington University Emergency Medicine. They each independently assigned a risk level to each activity, and then hashed out the ones on which they disagreed.

Take golf. How safe is it to go out to the links? Initially, the three experts had different risk levels assigned to this activity because they were all making different assumptions about what a game of golf naturally involved, Popescu said. “Are people doing it alone? If not, how many people are in a cart? Are they wearing masks? Are they drinking? …. those little variables that can increase the risk,” she told me.

Golf isn’t just golf. It’s how you golf that matters.

Those variables and assumptions aren’t trivial to calculating risk. Nor are they static. There’s different muck under your boggy feet in different parts of the country, at different times. For instance, how safe is it to eat outdoors with friends? Popescu’s chart ranks “outdoor picnic or porch dining” with people outside your household as low risk — a very validating categorization, personally. But a chart produced by the Texas Medical Association, based on a survey of its 53,000 physician members, rates “attending a backyard barbeque” as a moderate risk, a 5 on a scale in which 9 is the stuff most of us have no problem eschewing.

The info is here.

Monday, August 3, 2020

The Role of Cognitive Dissonance in the Pandemic

Elliot Aronson and Carol Tavris
The Atlantic
Originally published 12 July 20

Here is an excerpt:

Because of the intense polarization in our country, a great many Americans now see the life-and-death decisions of the coronavirus as political choices rather than medical ones. In the absence of a unifying narrative and competent national leadership, Americans have to choose whom to believe as they make decisions about how to live: the scientists and the public-health experts, whose advice will necessarily change as they learn more about the virus, treatment, and risks? Or President Donald Trump and his acolytes, who suggest that masks and social distancing are unnecessary or “optional”?

The cognition I want to go back to work or I want to go to my favorite bar to hang out with my friends is dissonant with any information that suggests these actions might be dangerous—if not to individuals themselves, then to others with whom they interact.

How to resolve this dissonance? People could avoid the crowds, parties, and bars and wear a mask. Or they could jump back into their former ways. But to preserve their belief that they are smart and competent and would never do anything foolish to risk their lives, they will need some self-justifications: Claim that masks impair their breathing, deny that the pandemic is serious, or protest that their “freedom” to do what they want is paramount. “You’re removing our freedoms and stomping on our constitutional rights by these Communist-dictatorship orders,” a woman at a Palm Beach County commissioners’ hearing said. “Masks are literally killing people,” said another. South Dakota Governor Kristi Noem, referring to masks and any other government interventions, said, “More freedom, not more government, is the answer.” Vice President Mike Pence added his own justification for encouraging people to gather in unsafe crowds for a Trump rally: “The right to peacefully assemble is enshrined in the First Amendment of the Constitution.”

The info is here.

Tuesday, May 26, 2020

Four concepts to assess your personal risk as the U.S. reopens

Leana Wen
The Washington Post
Originally posted 21 May 20

Here is an excerpt:

So what does that mean in terms of choices each of us makes — what’s safe to do and what’s not?

Here are four concepts from other harm-reduction strategies that can help to guide our decisions:

Relative risk. Driving is an activity that carries risk, which can be reduced by following the speed limit and wearing a seat belt. For covid-19, we can think of risk through three key variables: proximity, activity and time.

The highest-risk scenario is if you are in close proximity with someone who is infected, in an indoor space, for an extended period of time. That’s why when one person in the household becomes ill, others are likely to get infected, too.

Also, certain activities, such as singing, expel more droplets; in one case, a single infected person in choir practice spread covid-19 to 52 people, two of whom died.

The same goes for gatherings where people hug one another — funerals and birthdays can be such “superspreader” events. Conversely, there are no documented cases of someone acquiring covid-19 by passing a stranger while walking outdoors.

You can decrease your risk by modifying one of these three variables. If you want to see friends, avoid crowded bars, and instead host in your backyard or a park, where everyone can keep their distance.

Use your own utensils and, to be even safer, bring your own food and drinks.

Skip the hugs, kisses and handshakes. If you go to the beach, find areas where you can stay at least six feet away from others who are not in your household. Takeout food is the safest. If you really want a meal out, eating outdoors with tables farther apart will be safer than dining in a crowded indoor restaurant.

Businesses should also heed this principle as they are reopening, by keeping up telecommuting and staggered shifts, reducing capacity in conference rooms, and closing communal dining areas. Museums can limit not only the number of people allowed in at once, but also the amount of time people are allowed to spend in each exhibit.

Pooled risk. If you engage in high-risk activity and are around others who do the same, you increase everyone’s risk. Think of the analogy with safe-sex practices: Those with multiple partners have higher risk than people in monogamous relationships. As applied to covid-19, this means those who have very low exposure are probably safe to associate with one another.

This principle is particularly relevant for separated families that want to see one another. I receive many questions from grandparents who miss their grandchildren and want to know when they can see them again. If two families have both been sheltering at home with virtually no outside interaction, there should be no concern with them being with one another. Families can come together for day care arrangements this way if all continue to abide by strict social distancing guidelines in other aspects of their lives. (The equation changes when any one individual resumes higher-risk activities — returning to work outside the home, for example.)

The info is here.

Saturday, April 4, 2020

Suicide attempt survivors’ recommendations for improving mental health treatment for attempt survivors.

Melanie A. Hom et al.
Psychological Services. 
Advance online publication.
https://doi.org/10.1037/ser0000415

Abstract

Research indicates that connection to mental health care services and treatment engagement remain challenges among suicide attempt survivors. One way to improve suicide attempt survivors’ experiences with mental health care services is to elicit suggestions directly from attempt survivors regarding how to do so. This study aimed to identify and synthesize suicide attempt survivors’ recommendations for how to enhance mental health treatment experiences for attempt survivors. A sample of 329 suicide attempt survivors (81.5% female, 86.0% White/Caucasian, mean age = 35.07 ± 12.18 years) provided responses to an open-ended self-report survey question probing how treatment might be improved for suicide attempt survivors. Responses were analyzed utilizing both qualitative and quantitative techniques. Analyses identified four broad areas in which mental health treatment experiences might be improved for attempt survivors: (a) provider interactions (e.g., by reducing stigma of suicidality, expressing empathy, and using active listening), (b) intake and treatment planning (e.g., by providing a range of treatment options, including nonmedication treatments, and conducting a thorough assessment), (c) treatment delivery (e.g., by addressing root problems, bolstering coping skills, and using trauma-informed care), and (d) structural issues (e.g., by improving access to care and continuity of care). Findings highlight numerous avenues by which health providers might be able to facilitate more positive mental health treatment experiences for suicide attempt survivors. Research is needed to test whether implementing the recommendations offered by attempt survivors in this study might lead to enhanced treatment engagement, retention, and outcomes among suicide attempt survivors at large.

Here is an excerpt from the Discussion:

On this point, this study revealed numerous recommendations for how providers might be able to improve their interactions with attempt survivors. Suggestions in this domain aligned with prior studies on treatment experiences among suicide attempt survivors. For instance, recommendations that providers not stigmatize attempt survivors and, instead, empathize with them, actively listen to them, and humanize them, are consistent with aforementioned studies (Berglund et al., 2016; Frey et al., 2016; Shand et al., 2018; Sheehan et al., 2017; Taylor et al., 2009). This study’s findings regarding the importance of a collaborative therapeutic relationship are also consistent with previous work (Shand et al., 2018). Though each of these factors has been identified as salient to treatment engagement efforts broadly (see Barrett et al., 2008, for review), several suggestions that emerged in this study were more specific to attempt survivors. For example, ensuring that patients feel comfortable openly discussing suicidal thoughts and behaviors and taking disclosures of suicidality seriously are suggestions specifically applicable to the care of at-risk individuals. These recommendations not only support research indicating that asking about suicidality is not iatrogenic (see DeCou & Schumann, 2018, for review), but they also underscore the importance of considering the unique needs of attempt survivors. Indeed, given that most participants provided a recommendation in this area, the impact of provider-related factors should not be overlooked in the provision of care to this group.

Wednesday, December 11, 2019

When Assessing Novel Risks, Facts Are Not Enough

Baruch Fischhoff
Scientific American
September 2019

Here is an excerpt:

To start off, we wanted to figure out how well the general public understands the risks they face in everyday life. We asked groups of laypeople to estimate the annual death toll from causes such as drowning, emphysema and homicide and then compared their estimates with scientific ones. Based on previous research, we expected that people would make generally accurate predictions but that they would overestimate deaths from causes that get splashy or frequent headlines—murders, tornadoes—and underestimate deaths from “quiet killers,” such as stroke and asthma, that do not make big news as often.

Overall, our predictions fared well. People overestimated highly reported causes of death and underestimated ones that received less attention. Images of terror attacks, for example, might explain why people who watch more television news worry more about terrorism than individuals who rarely watch. But one puzzling result emerged when we probed these beliefs. People who were strongly opposed to nuclear power believed that it had a very low annual death toll. Why, then, would they be against it? The apparent paradox made us wonder if by asking them to predict average annual death tolls, we had defined risk too narrowly. So, in a new set of questions we asked what risk really meant to people. When we did, we found that those opposed to nuclear power thought the technology had a greater potential to cause widespread catastrophes. That pattern held true for other technologies as well.

To find out whether knowing more about a technology changed this pattern, we asked technical experts the same questions. The experts generally agreed with laypeople about nuclear power's death toll for a typical year: low. But when they defined risk themselves, on a broader time frame, they saw less potential for problems. The general public, unlike the experts, emphasized what could happen in a very bad year. The public and the experts were talking past each other and focusing on different parts of reality.

The info is here.

Monday, November 18, 2019

Understanding behavioral ethics can strengthen your compliance program

Jeffrey Kaplan
The FCPA Blog
Originally posted October 21, 2019

Behavioral ethics is a well-known field of social science which shows how — due to various cognitive biases — “we are not as ethical as we think.” Behavioral compliance and ethics (which is less well known) attempts to use behavioral ethics insights to develop and maintain effective compliance programs. In this post I explore some of the ways that this can be done.

Behavioral C&E should be viewed on two levels. The first could be called specific behavioral C&E lessons, meaning enhancements to the various discrete C&E program elements — e.g., risk assessment, training — based on behavioral ethics insights.   Several of these are discussed below.

The second — and more general — aspect of behavioral C&E is the above-mentioned overarching finding that we are not as ethical as we think. The importance of this general lesson rests on the notion that the greatest challenge to having effective C&E programs in organizations is often more about the “will” than the “way.”

That is, what is lacking in many business organizations is an understanding that strong C&E is truly necessary. After all, if we were as ethical as we think, then effective risk mitigation would be just a matter of finding the right punishment for an offense, and the power of logical thinking would do the rest. Behavioral ethics teaches that that assumption is ill-founded.

The info is here.

Thursday, November 14, 2019

Assessing risk, automating racism

Ruha Benjamin
Science  25 Oct 2019:
Vol. 366, Issue 6464, pp. 421-422

Here is an excerpt:

Practically speaking, their finding means that if two people have the same risk score that indicates they do not need to be enrolled in a “high-risk management program,” the health of the Black patient is likely much worse than that of their White counterpart. According to Obermeyer et al., if the predictive tool were recalibrated to actual needs on the basis of the number and severity of active chronic illnesses, then twice as many Black patients would be identified for intervention. Notably, the researchers went well beyond the algorithm developers by constructing a more fine-grained measure of health outcomes, by extracting and cleaning data from electronic health records to determine the severity, not just the number, of conditions. Crucially, they found that so long as the tool remains effective at predicting costs, the outputs will continue to be racially biased by design, even as they may not explicitly attempt to take race into account. For this reason, Obermeyer et al. engage the literature on “problem formulation,” which illustrates that depending on how one defines the problem to be solved—whether to lower health care costs or to increase access to care—the outcomes will vary considerably.
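The “problem formulation” point lends itself to a toy illustration (entirely synthetic records and a hypothetical selection rule, not the model Obermeyer et al. audited): ranking the same patients by past cost versus by active chronic conditions selects different people for the high-risk program.

```python
# Toy illustration of problem formulation (synthetic data; not the
# algorithm Obermeyer et al. audited). Each record is
# (patient_id, past_annual_cost_usd, active_chronic_conditions).
patients = [
    ("A", 12000, 1),  # high spending, relatively healthy
    ("B", 3000, 4),   # sick but low spending, e.g. barriers to access
    ("C", 9000, 2),
    ("D", 2500, 5),
]

def enroll_top_k(records, proxy, k=2):
    """Enroll the k patients ranked highest by the chosen proxy."""
    return [pid for pid, _, _ in sorted(records, key=proxy, reverse=True)[:k]]

# Problem formulated as "lower health care costs": rank by past spending.
by_cost = enroll_top_k(patients, proxy=lambda r: r[1])
# Problem formulated as "increase access to care": rank by health need.
by_need = enroll_top_k(patients, proxy=lambda r: r[2])

assert by_cost == ["A", "C"]  # the big spenders
assert by_need == ["D", "B"]  # the sickest patients
```

Because spending is an imperfect proxy for need, a model that predicts costs accurately can remain biased by design, exactly the dynamic the excerpt describes.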

Wednesday, September 11, 2019

Assessment of Patient Nondisclosures to Clinicians of Experiencing Imminent Threats

Levy AG, Scherer AM, Zikmund-Fisher BJ, Larkin K, Barnes GD, Fagerlin A.
JAMA Netw Open. Published online August 14, 2019. 2(8):e199277.
doi:10.1001/jamanetworkopen.2019.9277

Question 

How common is it for patients to withhold information from clinicians about imminent threats that they face (depression, suicidality, abuse, or sexual assault), and what are common reasons for nondisclosure?

Findings 

This survey study, incorporating 2 national, nonprobability, online surveys of a total of 4,510 US adults, found that at least one-quarter of participants who experienced each imminent threat reported withholding this information from their clinician. The most commonly endorsed reasons for nondisclosure included potential embarrassment, fear of being judged, and wanting to avoid difficult follow-up behavior.

Meaning

These findings suggest that concerns about potential negative repercussions may lead many patients who experience imminent threats to avoid disclosing this information to their clinician.

Conclusion

This study reveals an important concern about clinician-patient communication: if patients commonly withhold information from clinicians about significant threats that they face, then clinicians are unable to identify and attempt to mitigate these threats. Thus, these results highlight the continued need to develop effective interventions that improve the trust and communication between patients and their clinicians, particularly for sensitive, potentially life-threatening topics.

Monday, November 5, 2018

We Need To Examine The Ethics And Governance Of Artificial Intelligence

Nikita Malik
forbes.com
Originally posted October 4, 2018

Here is an excerpt:

The second concern is on regulation and ethics. Research teams at MIT and Harvard are already looking into the fast-developing area of AI to map the boundaries within which sensitive but important data can be used. Who determines whether this technology can save lives, for example, versus the very real risk of veering into an Orwellian dystopia?

Take artificial intelligence systems that have the ability to predict a crime based on an individual’s history and their propensity to do harm. Pennsylvania could be one of the first states in the United States to base criminal sentences not just on the crimes people are convicted of, but also on whether they are deemed likely to commit additional crimes in the future. Statistically derived risk assessments, based on factors such as age, criminal record, and employment, will help judges determine which sentences to give. This would help reduce the cost of, and burden on, the prison system.

Risk assessments, which have existed for a long time, have been used in other areas such as the prevention of terrorism and child sexual exploitation. In the latter category, existing human systems are so overburdened that children are often overlooked, at grave risk to themselves. Human errors in the casework on the severely abused child Gabriel Fernandez contributed to his eventual death at the hands of his parents and prompted a serious inquest into the shortcomings of the County Department of Children and Family Services in Los Angeles. Using artificial intelligence in vulnerability assessments of children could aid overworked caseworkers and administrators and flag errors in existing systems.

The info is here.

Friday, October 19, 2018

Risk Management Considerations When Treating Violent Patients

Kristen Lambert
Psychiatric News
Originally posted September 4, 2018

Here is an excerpt:

When a patient has a history of expressing homicidal ideation or has been violent previously, you should document, in every subsequent session, whether the patient admits or denies homicidal ideation. When the patient expresses homicidal ideation, document what he/she expressed and the steps you did or did not take in response and why. Should an incident occur, your documentation will play an important role in defending your actions.

Despite taking precautions, your patient may still commit a violent act. The following are some strategies that may minimize your risk.

  • Conduct complete timely/thorough risk assessments.
  • Document, including the reasons for taking and not taking certain actions.
  • Understand your state’s law on duty to warn. Be aware of the language in the law on whether you have a mandatory, permissive, or no duty to warn/protect.
  • Understand your state’s laws regarding civil commitment.
  • Understand your state’s laws regarding disclosure of confidential information and when you can do so.
  • Understand your state’s laws regarding discussing firearms ownership and/or possession with patients.
  • If you have questions, consult an attorney or risk management professional.

Friday, July 27, 2018

Morality in the Machines

Erick Trickey
Harvard Law Bulletin
Originally posted June 26, 2018

Here is an excerpt:

In February, the Harvard and MIT researchers endorsed a revised approach in the Massachusetts House’s criminal justice bill, which calls for a bail commission to study risk-assessment tools. In late March, the House-Senate conference committee included the more cautious approach in its reconciled criminal justice bill, which passed both houses and was signed into law by Gov. Charlie Baker in April.

Meanwhile, Harvard and MIT scholars are going still deeper into the issue. Bavitz and a team of Berkman Klein researchers are developing a database of governments that use risk scores to help set bail. It will be searchable to see whether court cases have challenged a risk-score tool’s use, whether that tool is based on peer-reviewed scientific literature, and whether its formulas are public.

Many risk-score tools are created by private companies that keep their algorithms secret. That lack of transparency creates due-process concerns, says Bavitz. “Flash forward to a world where a judge says, ‘The computer tells me you’re a risk score of 5 out of 7.’ What does it mean? I don’t know. There’s no opportunity for me to lift up the hood of the algorithm.” Instead, he suggests governments could design their own risk-assessment algorithms and software, using staff or by collaborating with foundations or researchers.

Students in the ethics class agreed that risk-score programs shouldn’t be used in court if their formulas aren’t transparent, according to then-HLS 3L Arjun Adusumilli. “When people’s liberty interests are at stake, we really expect a certain amount of input, feedback and appealability,” he says. “Even if the thing is statistically great, and makes good decisions, we want reasons.”

The information is here.

Thursday, February 15, 2018

Engineers, philosophers and sociologists release ethical design guidelines for future technology

Rafael A Calvo and Dorian Peters
The Conversation
Originally posted December 12, 2017

Here is an excerpt:

The big questions posed by our digital future sit at the intersection of technology and ethics. This is complex territory that requires input from experts in many different fields if we are to navigate it successfully.

To prepare the report, economists and sociologists researched the effect of technology on disempowered groups. Lawyers considered the future of privacy and justice. Doctors and psychologists examined impacts on physical and mental health. Philosophers unpacked hidden biases and moral questions.

The report suggests all technologies should be guided by five general principles:

  • protecting human rights
  • prioritising and employing established metrics for measuring wellbeing
  • ensuring designers and operators of new technologies are accountable
  • making processes transparent
  • minimising the risks of misuse.

Sticky questions

The report runs the spectrum from practical to more abstract concerns, touching on personal data ownership, autonomous weapons, job displacement and questions like “can decisions made by amoral systems have moral consequences?”

One section deals with a “lack of ownership or responsibility from the tech community”. It points to a divide between how the technology community sees its ethical responsibilities and the broader social concerns raised by public, legal, and professional communities.

The article is here.

Tuesday, February 13, 2018

How Should Physicians Make Decisions about Mandatory Reporting When a Patient Might Become Violent?

Amy Barnhorst, Garen Wintemute, and Marian Betz
AMA Journal of Ethics. January 2018, Volume 20, Number 1: 29-35.

Abstract

Mandatory reporting of persons believed to be at imminent risk for committing violence or attempting suicide can pose an ethical dilemma for physicians, who might find themselves struggling to balance various conflicting interests. Legal statutes dictate general scenarios that require mandatory reporting to supersede confidentiality requirements, but physicians must use clinical judgment to determine whether and when a particular case meets the requirement. In situations in which it is not clear whether reporting is legally required, the situation should be analyzed for its benefit to the patient and to public safety. Access to firearms can complicate these situations, as firearms are a well-established risk factor for violence and suicide yet also a sensitive topic about which physicians and patients might have strong personal beliefs.

The commentary is here.

Monday, June 5, 2017

AI May Hold the Key to Stopping Suicide

Bahar Gholipour
NBC News
Originally posted May 23, 2017

Here is an excerpt:

So far the results are promising. Using AI, Ribeiro and her colleagues were able to predict whether someone would attempt suicide within the next two years at about 80 percent accuracy, and within the next week at 92 percent accuracy. Their findings were recently reported in the journal Clinical Psychological Science.

This high level of accuracy was possible because of machine learning, as researchers trained an algorithm by feeding it anonymized health records from 3,200 people who had attempted suicide. The algorithm learns patterns by examining combinations of factors that lead to suicide, from medication use to the number of ER visits over many years. Bizarre factors may pop up as related to suicide, such as acetaminophen use a year prior to an attempt, but that doesn't mean taking acetaminophen can be isolated as a risk factor for suicide.
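The training setup described above can be sketched as a small supervised classifier. The code below is a minimal illustration only: the features (ER visits, years of medication use, a prior-attempt flag), weights, and data are all synthetic stand-ins, not the records or model used by Ribeiro's team, which drew on thousands of anonymized health records and far richer feature sets.

```python
import math

# Hypothetical feature vector per patient record:
#   [num_ER_visits, years_of_medication_use, prior_attempt_flag]
# Labels: 1 = attempted suicide within the window, 0 = did not.
# All data below is synthetic; it merely stands in for the anonymized
# health records described in the article.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(records, labels, lr=0.1, epochs=500):
    """Fit a tiny logistic-regression model by stochastic gradient descent."""
    w = [0.0] * len(records[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(records, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_risk(w, b, x):
    """Return a probability-like risk score in [0, 1]."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Synthetic training data: frequent ER visits and a prior attempt
# correlate with the positive label in this toy set.
records = [[5, 2, 1], [4, 3, 1], [6, 1, 1], [1, 0, 0], [0, 1, 0], [2, 0, 0]]
labels  = [1, 1, 1, 0, 0, 0]

w, b = train(records, labels)
high = predict_risk(w, b, [5, 2, 1])
low  = predict_risk(w, b, [0, 0, 0])
```

The point of the sketch is the one Ribeiro makes: the model outputs a usable risk score from combinations of features, even though no single learned weight can be read off as "the" risk factor.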

"As humans, we want to understand what to look for," Ribeiro says. "But this is like asking what's the most important brush stroke in a painting."

With funding from the Department of Defense, Ribeiro aims to create a tool that can be used in clinics and emergency rooms to better find and help high-risk individuals.

The article is here.

Thursday, April 6, 2017

How to Upgrade Judges with Machine Learning

by Tom Simonite
MIT Technology Review
Originally posted March 6, 2017

Here is an excerpt:

The algorithm assigns defendants a risk score based on data pulled from records for their current case and their rap sheet, for example the offense they are suspected of, when and where they were arrested, and the number and type of prior convictions. (The only demographic data it uses is age, not race.)

Kleinberg suggests that algorithms could help judges without major disruption to the way they currently work, in the form of a warning system that flags decisions highly likely to be wrong. Analysis of judges' performance suggested they occasionally release people who are very likely to fail to appear in court, or to commit a crime while awaiting trial. An algorithm could catch many of those cases, says Kleinberg.
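A warning system of this kind can be pictured as a scoring function over case-record features plus a flagging threshold. The sketch below is purely illustrative: the features mirror those mentioned in the article (prior convictions, failure to appear, age as the only demographic input), but the weights, threshold, and scoring scheme are invented for this example, not taken from Kleinberg's model.

```python
# Hypothetical cutoff for the "warning system" that flags risky releases.
FLAG_THRESHOLD = 0.6

# Hypothetical per-feature weights; age is the only demographic input.
WEIGHTS = {
    "prior_felonies": 0.15,
    "prior_misdemeanors": 0.05,
    "failed_to_appear_before": 0.30,
    "age_under_25": 0.10,
}

def risk_score(defendant):
    """Combine case-record features into a single score, capped at 1.0."""
    score = 0.0
    score += WEIGHTS["prior_felonies"] * defendant.get("prior_felonies", 0)
    score += WEIGHTS["prior_misdemeanors"] * defendant.get("prior_misdemeanors", 0)
    if defendant.get("failed_to_appear_before"):
        score += WEIGHTS["failed_to_appear_before"]
    if defendant.get("age", 99) < 25:
        score += WEIGHTS["age_under_25"]
    return min(score, 1.0)

def flag_for_review(defendant):
    """Mimic the warning system: flag releases whose score exceeds the cutoff."""
    return risk_score(defendant) >= FLAG_THRESHOLD

high_risk = {"prior_felonies": 3, "failed_to_appear_before": True, "age": 22}
low_risk = {"prior_misdemeanors": 1, "age": 40}
```

Here `high_risk` scores 0.85 and would be flagged for a second look, while `low_risk` scores 0.05 and would not; the judge, not the algorithm, still makes the release decision.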

Richard Berk, a professor of criminology at the University of Pennsylvania, describes the study as “very good work,” and an example of a recent acceleration of interest in applying machine learning to improve criminal justice decisions. The idea has been explored for 20 years, but machine learning has become more powerful, and data to train it more available.

Berk recently tested a system with the Pennsylvania State Parole Board that advises on the risk a person will reoffend, and found evidence it reduced crime. The NBER study is important because it looks at how machine learning can be used pre-sentencing, an area that hasn’t been thoroughly explored, he says.

The article is here.

Editor's Note: I often wonder how long it will be until machine learning is applied to psychotherapy.

Sunday, February 12, 2017

Expert Witness Testimony in Civil Commitment Hearings for Sexually Dangerous Individuals

Jennifer E. Alleyne, Kaustubh G. Joshi and Marie E. Gehle
Journal of the American Academy of Psychiatry and the Law Online June 2016, 44 (2) 265-267

Here is the Discussion Section:

Sexually dangerous individual (or sexually violent predator) laws across the country follow a general scheme. The individual has been convicted of certain sexual offenses and has a mental abnormality or personality disorder that makes him likely to commit similar crimes in the future. Whether the case is decided by a judge or a jury, the result is frequently the indefinite commitment of the person. Because the questions at hand are generally outside the expertise of the trier of fact, the testimony of qualified expert witnesses is crucial. Therefore, the admissibility and credibility of mental health testimony are often heavily scrutinized during the proceedings.

Mr. Loy sought to find Dr. Sullivan's and Dr. Volk's testimonies inadmissible on different grounds. Having a license on probation, giving testimony that creates an alleged bias, or, for example, routinely testifying for one side versus the other does not automatically render the witness unqualified or the testimony inadmissible. In most jurisdictions, the case law and statutes governing the admission of expert witness testimony allow for its use if the witness has some degree of expertise in the field in which he will testify and if the testimony helps the trier of fact to understand the evidence or determine a fact at issue.

Inherent in the civil commitment of sexual offenders are complex concerns regarding psychiatric diagnoses, risk assessment, and volitional impairment. The trier of fact depends on expert testimony to understand and decide these questions. If the expert has a skeleton in the closet, has an imperfection in his qualifications, or holds an alleged bias, the trier of fact should appropriately weigh the credibility of that testimony when rendering a decision. Such testimony is not automatically inadmissible. A court's admission of expert witness testimony will not be reversed unless the district court abused its discretion. Finally, in most jurisdictions, the court's assessment of witness credibility is granted deference.

The article is here.

Monday, December 12, 2016

Preventing Conflicts of Interest of NFL Team Physicians

Mark A. Rothstein
The Hastings Center Report
Originally posted November 21, 2016

Abstract

At least since the time of Hippocrates, the physician-patient relationship has been the paradigmatic ethical arrangement for the provision of medical care. Yet, a physician-patient relationship does not exist in every professional interaction involving physicians and individuals they examine or treat. There are several “third-party” relationships, mostly arising where the individual is not a patient and is merely being examined rather than treated, the individual does not select or pay the physician, and the physician's services are provided for the benefit of another party. Physicians who treat NFL players have a physician-patient relationship, but physicians who merely examine players to determine their health status have a third-party relationship. As described by Glenn Cohen et al., the problem is that typical NFL team doctors perform both functions, which leads to entrenched conflicts of interest. Although there are often disputes about treatment, the main point of contention between players and team physicians is the evaluation of injuries and the reporting of players’ health status to coaches and other team personnel. Cohen et al. present several thoughtful recommendations that deserve serious consideration. Rather than focusing on their specific recommendations, however, I would like to explain the rationale for two essential reform principles: the need to sever the responsibilities of treatment and evaluation by team physicians and the need to limit the amount of player medical information disclosed to teams.

Friday, November 11, 2016

The map is not the territory: medical records and 21st century practice

Stephen A Martin & Christine A Sinsky
The Lancet
Published: 25 April 2016

Summary

Documentation of care is at risk of overtaking the delivery of care in terms of time, clinician focus, and perceived importance. The medical record as currently used for documentation contributes to increased cognitive workload, strained clinician–patient relationships, and burnout. We posit that a near verbatim transcript of the clinical encounter is neither feasible nor desirable, and that attempts to produce this exact recording are harmful to patients, clinicians, and the health system. In this Viewpoint, we focus on the alternative constructions of the medical record to bring them back to their primary purpose—to aid cognition, communicate, create a succinct account of care, and support longitudinal comprehensive care—thereby to support the building of relationships and medical decision making while decreasing workload.

Here are two excerpts:

While our vantage point is American, documentation guidelines are part of a global tapestry of what has been termed technogovernance, a bureaucratic model in which professionals' behaviour is shaped and manipulated by tight regulatory policies.

(cut)

In 1931, the scientist Alfred Korzybski introduced the phrase "the map is not the territory", to suggest that the representation of reality is not reality itself. In health care, creating the map (ie, the clinical record) can take on more importance and consume more resources than providing care itself. Indeed, more time may be spent documenting care than delivering care. In addition, fee-for-service payment arrangements pay for the map (the medical note), not the territory (the actual care). Readers of contemporary electronic notes, composed largely of auto-text output, copied-forward text, and boilerplate statements for compliance, billing, and performance measurement, understand all too well the gap between the map and the territory, and more profoundly, between what is done to patients in service of creating the map and what patients actually need.

Contemporary medical records are used for purposes that extend beyond supporting patient and caregiver. Records are used in quality evaluations, practitioner monitoring, practice certifications, billing justification, audit defence, disability determinations, health insurance risk assessments, legal actions, and research.