Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Risk Assessment. Show all posts

Sunday, July 27, 2025

Meta-analysis of risk factors for suicide after psychiatric discharge and meta-regression of the duration of follow-up

Tai, A., Pincham, H., Basu, A., & Large, M. (2025).
Australian and New Zealand Journal of Psychiatry,
48674251348372. Advance online publication.

Abstract

Background: Rates of suicide following discharge from psychiatric hospitals are extraordinarily high in the first week post-discharge and then decline steeply over time. The aim of this meta-analysis is to evaluate the strength of risk factors for suicide after psychiatric discharge and to investigate the association between the strength of risk factors and duration of study follow-up.

Methods: A PROSPERO-registered meta-analysis of observational studies was performed in accordance with PRISMA guidelines. Post-discharge suicide risk factors reported five or more times were synthesised using a random-effects model. Mixed-effects meta-regression was used to examine whether the strength of suicide risk factors could be explained by duration of study follow-up.

Results: Searches located 83 primary studies. From these, 63 risk estimates were meta-analysed. The strongest risk factors were previous self-harm (odds ratio = 2.75, 95% confidence interval = [2.37, 3.19]), suicidal ideation (odds ratio = 2.15, 95% confidence interval = [1.73, 2.68]), depressive symptoms (odds ratio = 1.84, 95% confidence interval = [1.48, 2.30]), and high-risk categorisation (odds ratio = 7.65, 95% confidence interval = [5.48, 10.67]). Significant protective factors included age ⩽30, age ⩾65, post-traumatic stress disorder, and dementia. The effect sizes for the strongest post-discharge suicide risk factors did not decline over longer periods of follow-up.

Conclusion: The effect sizes of post-discharge suicide risk factors were generally modest, suggesting that clinical risk factors may have limited value in distinguishing between high-risk and low-risk groups. The highly elevated rates of suicide immediately after discharge and their subsequent decline remain unexplained.
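For readers unfamiliar with the method, the random-effects synthesis described in the abstract can be sketched in a few lines. The odds ratios below are made up for illustration, and the between-study variance uses the common DerSimonian-Laird estimator; this is not the authors' code or data.

```python
import math

def pool_random_effects(odds_ratios, ci_uppers):
    """DerSimonian-Laird random-effects pooling of odds ratios.

    Each study contributes log(OR); its standard error is recovered
    from the 95% CI upper bound: se = (ln(upper) - ln(OR)) / 1.96.
    """
    ys = [math.log(or_) for or_ in odds_ratios]
    ses = [(math.log(u) - y) / 1.96 for u, y in zip(ci_uppers, ys)]
    vs = [se ** 2 for se in ses]

    # Fixed-effect weights and Cochran's Q heterogeneity statistic
    w = [1 / v for v in vs]
    y_fe = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, ys))
    df = len(ys) - 1

    # Between-study variance tau^2 (method of moments, floored at 0)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights, pooled log-OR, and 95% CI on the OR scale
    w_re = [1 / (v + tau2) for v in vs]
    y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    lo = math.exp(y_re - 1.96 * se_re)
    hi = math.exp(y_re + 1.96 * se_re)
    return math.exp(y_re), (lo, hi)

# Three hypothetical studies reporting an OR for previous self-harm
pooled, (lo, hi) = pool_random_effects([2.4, 3.1, 2.7], [3.3, 4.5, 3.6])
```

With these invented inputs the pooled odds ratio lands near the paper's reported 2.75 for previous self-harm, which is only a coincidence of the chosen numbers; the point is the mechanics of inverse-variance weighting.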

Tuesday, March 11, 2025

Moral Challenges for Psychologists Working in Psychology and Law

Allan A. (2018).
Psychiatry, Psychology and Law, 25(3), 485–499.

Abstract

States have an obligation to protect themselves and their citizens from harm, and they use the coercive powers of law to investigate threats, enforce rules and arbitrate disputes, thereby impacting on people's well-being and legal rights and privileges. Psychologists as a collective have a responsibility to use their abilities, knowledge, skill and experience to enhance law's effectiveness, efficiency, and reliability in preventing harm, but their professional behaviour in this collaboration must be moral. They could, however, find their personal values to be inappropriate or there to be insufficient moral guides and could find it difficult to obtain definitive moral guidance from law. The profession's ethical principles do, however, provide well-articulated, generally accepted and profession-appropriate guidance, but practitioners might encounter moral issues that can only be solved by the profession as a whole or society.

Here are some thoughts:

While psychologists play a crucial role in assisting the law to protect society through assessments, risk evaluations, and expert opinions, their work often intersects with coercive practices that can impact individual rights and well-being. Psychologists must navigate the tension between societal protection and respect for human dignity, especially when involved in involuntary detention, forensic interviews, and risk assessments. They are guided by core ethical principles such as non-maleficence, justice, fidelity, and respect, but these principles can conflict, requiring careful ethical decision-making. Challenges are particularly pronounced in areas like risk assessment, where tools may be flawed or culturally biased, and where psychologists might face pressure to align with legal expectations, potentially compromising their objectivity and professional integrity.

The article emphasizes the need for psychologists in legal settings to maintain public trust, uphold human rights principles, and utilize structured, evidence-based, and culturally sensitive methods in their practice. Beyond individual ethical conduct, psychologists have a responsibility to advocate for systemic improvements, including better assessment tools for diverse populations and robust ethical guidelines. Ultimately, the article underscores that psychologists in law must continually engage in moral reflection, striving for a just and effective legal system while minimizing harm and ensuring their practice remains ethically sound and socially responsible, guided by both professional ethics and universal human rights frameworks.

Saturday, December 14, 2024

Suicides in the US military increased in 2023, continuing a long-term trend

Lolita C. Baldor
Associated Press
Originally posted 14 Nov 24

Suicides in the U.S. military increased in 2023, continuing a long-term trend that the Pentagon has struggled to abate, according to a Defense Department report released on Thursday. The increase is a bit of a setback after the deaths dipped slightly the previous year.

The number of suicides and the rate per 100,000 active-duty service members both went up, but the rise was not statistically significant. The number also rose among members of the Reserves, while it decreased slightly for the National Guard.

Defense Secretary Lloyd Austin has declared the issue a priority, and top leaders in the Defense Department and across the services have worked to develop programs both to increase mental health assistance for troops and to bolster education on gun safety, locks and storage. Many of the programs, however, have not been fully implemented yet, and the moves fall short of more drastic gun safety measures recommended by an independent commission.


Here are some thoughts:

The report from the Associated Press focuses on the rise in suicide rates among U.S. military personnel in 2023. Despite efforts by the Pentagon to reduce these numbers, the suicide rate increased, although the rise was not statistically significant. This follows a trend of increasing suicides among active-duty members since 2011.

The article highlights the ongoing efforts to address the problem, including increasing access to mental health care and promoting gun safety measures, but also points to an independent commission's recommendation for more drastic gun safety regulations that have not yet been implemented. The article concludes with the overall trend of suicide rates in the military and among family members of service members, as well as information on how to access mental health support through the 988 Lifeline.

Tuesday, November 5, 2024

Women are increasingly using firearms in suicide deaths, CDC data reveals

Eduardo Cuevas
USA Today
Originally posted 26 SEPT 24

More women in the U.S. are using firearms in suicide deaths, a new federal report says.

Firearms were used in more than half the country’s record 49,500 suicide deaths in 2022, Centers for Disease Control and Prevention data shows. Traditionally, men die by suicide at a much higher rate than women, and they often do so using guns. The CDC report published Thursday, however, found firearms were the leading means of suicide for women since 2020, and suicide deaths overall among women also increased.

Firearms have been the primary means for most suicide deaths in the U.S. Guns stored in homes, especially those not stored securely, are linked to higher levels of suicide.

Increased use of firearms by women corresponds to a greater risk of suicide, Rebecca Bernert, founder of the Stanford Suicide Prevention Research Laboratory, said in an email.

For this reason, it's important to teach gun owners about safe storage to prevent people from having immediate access to a loaded weapon, said Bernert, who is also a Stanford Medicine professor. Restricting access to “lethal means," she said, is among "the most potent suicide prevention strategies that exist worldwide."

The problem, Bernert said, is such restrictions tend to be "vastly underutilized and poorly understood as a public health strategy.”


Here are some thoughts:

Recent data from the Centers for Disease Control and Prevention (CDC) reveals a concerning trend in suicide deaths among women in the United States. In 2022, firearms were used in over half of the country's record 49,500 suicide deaths.

While men traditionally have higher suicide rates and more frequently use firearms, the CDC report indicates that since 2020, firearms have become the leading means of suicide for women as well. This shift corresponds with an overall increase in suicide deaths among women. Experts attribute this trend to various factors, including increased gun ownership among women, particularly during the COVID-19 pandemic, which also exacerbated stress and isolation.

The accessibility of firearms in homes, especially when not stored securely, is linked to higher suicide risks. Suicide prevention specialists emphasize the importance of safe gun storage and restricting access to lethal means as crucial strategies.

The report highlights the need for a comprehensive approach to suicide prevention, including addressing social connections, mental health support, and awareness of crisis resources. While suicide rates have been rising across demographics, the increasing use of firearms by women in suicide attempts is a particularly alarming development that requires urgent attention and targeted interventions.

Saturday, September 14, 2024

Should psychotherapists conduct visual assessments of nonsuicidal self-injury wounds?

Westers, N. J. (2024).
Psychotherapy, 61(3), 250–258.

Abstract

Beneficence and nonmaleficence are key ethical principles toward which psychotherapists consistently strive. When patients engage in nonsuicidal self-injury (NSSI) during the course of psychotherapy, therapists may feel responsible for visually assessing the severity of the NSSI wound in order to benefit their patients and keep them from harm. However, there are no guidelines for conducting these visual assessments, and there is no research exploring their effects on patients. This article considers the ethical implications of visually examining NSSI wounds; discusses psychotherapist scope of practice and competence; draws attention to relevant ethical standards; underscores risk management, liability, and standard of care; and addresses the risk of suicide or accidental death resulting from NSSI. It also provides ethical guidance for conducting effective verbal assessments of NSSI wounds and offers suggestions for navigating complex clinical situations, such as when patients routinely and spontaneously show their therapists their wounds and how psychotherapists should handle assessments and interventions related to NSSI scars. It ends with implications for training and therapeutic practice.

Impact Statement

Question: How should psychotherapists navigate assessment of nonsuicidal self-injury (NSSI) wounds, and how does this inform their work with auxiliary treatment team members such as medical professionals and parents?

Findings: This article discusses the scope of practice of psychology, individual psychotherapist competence, and risk management to critically evaluate if and how therapists should assess NSSI wounds. 

Meaning: There may be times when briefly looking at NSSI wounds is appropriate in psychotherapy, but visually assessing NSSI wounds is not within the scope of practice of psychology and may not protect patients from harm.

Next Steps: Research should examine with patients if and how their psychotherapist conducted visual assessments of their NSSI wounds, by whom these assessments were initiated, how they affected the patient experience, and if they resulted in help or harm.

The article is paywalled.

Here are some thoughts:

Current research lacks data on the impact of visually or verbally assessing NSSI wounds on patients. This article argues that visual assessment of NSSI wounds is outside the scope of practice for psychologists and can be potentially harmful. Therefore, psychologists need to be aware of interpersonal boundaries, clinical literature, and ethical standards. Instead, verbal assessment is recommended as best practice. Effective verbal assessment techniques include open-ended questions about wound care, pain, and medical attention, while maintaining a respectful and curious demeanor. Therapists should prioritize patient safety and refer patients to medical professionals when necessary. Ultimately, a balance between patient care and ethical boundaries should guide clinical practice.

Wednesday, July 5, 2023

Taxonomy of Risks posed by Language Models

Weidinger, L., Uesato, J., et al. (2022, March).
In Proceedings of the 2022 ACM Conference on 
Fairness, Accountability, and Transparency
(pp. 19-30).
Association for Computing Machinery.

Abstract

Responsible innovation on large-scale Language Models (LMs) requires foresight into and in-depth understanding of the risks these models may pose. This paper develops a comprehensive taxonomy of ethical and social risks associated with LMs. We identify twenty-one risks, drawing on expertise and literature from computer science, linguistics, and the social sciences. We situate these risks in our taxonomy of six risk areas: I. Discrimination, Hate speech and Exclusion, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, and VI. Environmental and Socioeconomic harms. For risks that have already been observed in LMs, the causal mechanism leading to harm, evidence of the risk, and approaches to risk mitigation are discussed. We further describe and analyse risks that have not yet been observed but are anticipated based on assessments of other language technologies, and situate these in the same taxonomy. We underscore that it is the responsibility of organizations to engage with the mitigations we discuss throughout the paper. We close by highlighting challenges and directions for further research on risk evaluation and mitigation with the goal of ensuring that language models are developed responsibly.

Conclusion

In this paper, we propose a comprehensive taxonomy to structure the landscape of potential ethical and social risks associated with large-scale language models (LMs). We aim to support the research programme toward responsible innovation on LMs, broaden the public discourse on ethical and social risks related to LMs, and break risks from LMs into smaller, actionable pieces to facilitate their mitigation. More expertise and perspectives will be required to continue to build out this taxonomy of potential risks from LMs. Future research may also expand this taxonomy by applying additional methods such as case studies or interviews. Next steps building on this work will be to engage further perspectives, to innovate on analysis and evaluation methods, and to build out mitigation tools, working toward the responsible innovation of LMs.


Here is a summary of each of the six categories of risks:
  • Discrimination, hate speech and exclusion: LMs can reproduce social stereotypes and unfair discrimination against certain groups, generate hateful or toxic language, and perform worse for marginalized groups and dialects.
  • Information hazards: LMs can leak private or sensitive information contained in their training data, or supply information that enables harm.
  • Misinformation harms: LMs can generate false or misleading content that users believe and act on, eroding trust in shared information.
  • Malicious uses: LMs can be deliberately misused, for example for fraud, scams, or disinformation campaigns at scale.
  • Human-computer interaction harms: conversational agents can lead users to anthropomorphize systems, overtrust them, or be manipulated by them.
  • Environmental and socioeconomic harms: training and deploying LMs consumes substantial energy, and automation can concentrate economic benefits while displacing workers.

Wednesday, September 16, 2020

There are no good choices

Ezra Klein
vox.com
Originally published 14 Sept 20

Here is an excerpt:

In America, our ideological conflicts are often understood as the tension between individual freedoms and collective actions. The failure of our pandemic response policy exposes the falseness of that frame. In the absence of effective state action, we, as individuals, find ourselves in prisons of risk, our every movement stalked by disease. We are anything but free; our only liberty is to choose among a menu of awful options. And faced with terrible choices, we are turning on each other, polarizing against one another. YouTube conspiracies and social media shaming are becoming our salves, the way we wrest a modicum of individual control over a crisis that has overwhelmed us as a collective.

“The burden of decision-making and risk in this pandemic has been fully transitioned from the top down to the individual,” says Dr. Julia Marcus, a Harvard epidemiologist. “It started with [responsibility] being transitioned to the states, which then transitioned it to the local school districts — If we’re talking about schools for the moment — and then down to the individual. You can see it in the way that people talk about personal responsibility, and the way that we see so much shaming about individual-level behavior.”

But in shifting so much responsibility to individuals, our government has revealed the limits of individualism.

The risk calculation that rules, and ruins, lives

Think of coronavirus risk like an equation. Here’s a rough version of it: The danger of an act = (the transmission risk of the activity) x (the local prevalence of Covid-19) / (your area’s ability to control a new outbreak).

Individuals can control only a small portion of that equation. People can choose safer activities over riskier ones — though the language of choice too often obscures the reality that many have no economic choice save to work jobs that put them, and their families, in danger. But the local prevalence of Covid-19 and the capacity of authorities to track and squelch outbreaks are collective functions.
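Klein's rough equation can be turned into a toy calculation. The function and the 0-to-10 scores below are illustrative assumptions, not values from the article:

```python
def covid_act_risk(transmission_risk, local_prevalence, containment_capacity):
    """Rough danger score for an activity, following the column's equation:
    danger = (transmission risk) * (local prevalence) / (containment capacity).
    All inputs are unitless scores; a higher result means a riskier act."""
    return transmission_risk * local_prevalence / containment_capacity

# Hypothetical scores on a 0-10 scale for one locale
indoor_dining = covid_act_risk(transmission_risk=8, local_prevalence=6,
                               containment_capacity=4)
outdoor_walk = covid_act_risk(transmission_risk=1, local_prevalence=6,
                              containment_capacity=4)
```

Note how the last two terms are shared: an individual can only change the first factor, which is the column's point about the limits of personal responsibility.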

The info is here.

Thursday, August 13, 2020

Every Decision Is A Risk. Every Risk Is A Decision.

Maggie Koerth
fivethirtyeight.com
Originally posted 21 July 20

Here is an excerpt:

In general, research has shown that indoors is riskier than outside, long visits riskier than short ones, crowds riskier than individuals — and, look, just avoid situations where you’re being sneezed, yelled, coughed or sung at.

But the trouble with the muddy middle is that a general idea of what is riskier isn’t the same thing as a clear delineation between right and wrong. These charts — even the best ones — aren’t absolute arbiters of safety: They’re the result of surveying experts. In the case of Popescu’s chart, the risk categorizations were assigned based on discussions among herself, Emanuel and Dr. James P. Phillips, the chief of disaster medicine at George Washington University Emergency Medicine. They each independently assigned a risk level to each activity, and then hashed out the ones on which they disagreed.

Take golf. How safe is it to go out to the links? Initially, the three experts had different risk levels assigned to this activity because they were all making different assumptions about what a game of golf naturally involved, Popescu said. “Are people doing it alone? If not, how many people are in a cart? Are they wearing masks? Are they drinking? …. those little variables that can increase the risk,” she told me.

Golf isn’t just golf. It’s how you golf that matters.

Those variables and assumptions aren’t trivial to calculating risk. Nor are they static. There’s different muck under your boggy feet in different parts of the country, at different times. For instance, how safe is it to eat outdoors with friends? Popescu’s chart ranks “outdoor picnic or porch dining” with people outside your household as low risk — a very validating categorization, personally. But a chart produced by the Texas Medical Association, based on a survey of its 53,000 physician members, rates “attending a backyard barbeque” as a moderate risk, a 5 on a scale in which 9 is the stuff most of us have no problem eschewing.

The info is here.

Monday, August 3, 2020

The Role of Cognitive Dissonance in the Pandemic

Elliot Aronson and Carol Tavris
The Atlantic
Originally published 12 July 20

Here is an excerpt:

Because of the intense polarization in our country, a great many Americans now see the life-and-death decisions of the coronavirus as political choices rather than medical ones. In the absence of a unifying narrative and competent national leadership, Americans have to choose whom to believe as they make decisions about how to live: the scientists and the public-health experts, whose advice will necessarily change as they learn more about the virus, treatment, and risks? Or President Donald Trump and his acolytes, who suggest that masks and social distancing are unnecessary or “optional”?

The cognition I want to go back to work or I want to go to my favorite bar to hang out with my friends is dissonant with any information that suggests these actions might be dangerous—if not to individuals themselves, then to others with whom they interact.

How to resolve this dissonance? People could avoid the crowds, parties, and bars and wear a mask. Or they could jump back into their former ways. But to preserve their belief that they are smart and competent and would never do anything foolish to risk their lives, they will need some self-justifications: Claim that masks impair their breathing, deny that the pandemic is serious, or protest that their “freedom” to do what they want is paramount. “You’re removing our freedoms and stomping on our constitutional rights by these Communist-dictatorship orders,” a woman at a Palm Beach County commissioners’ hearing said. “Masks are literally killing people,” said another. South Dakota Governor Kristi Noem, referring to masks and any other government interventions, said, “More freedom, not more government, is the answer.” Vice President Mike Pence added his own justification for encouraging people to gather in unsafe crowds for a Trump rally: “The right to peacefully assemble is enshrined in the First Amendment of the Constitution.”

The info is here.

Tuesday, May 26, 2020

Four concepts to assess your personal risk as the U.S. reopens

Leana Wen
The Washington Post
Originally posted 21 May 20

Here is an excerpt:

So what does that mean in terms of choices each of us makes — what’s safe to do and what’s not?

Here are four concepts from other harm-reduction strategies that can help to guide our decisions:

Relative risk. Driving is an activity that carries risk, which can be reduced by following the speed limit and wearing a seat belt. For covid-19, we can think of risk through three key variables: proximity, activity and time.

The highest-risk scenario is if you are in close proximity with someone who is infected, in an indoor space, for an extended period of time. That’s why when one person in the household becomes ill, others are likely to get infected, too.

Also, certain activities, such as singing, expel more droplets; in one case, a single infected person in choir practice spread covid-19 to 52 people, two of whom died.

The same goes for gatherings where people hug one another — funerals and birthdays can be such “superspreader” events. Conversely, there are no documented cases of someone acquiring covid-19 by passing a stranger while walking outdoors.

You can decrease your risk by modifying one of these three variables. If you want to see friends, avoid crowded bars, and instead host in your backyard or a park, where everyone can keep their distance.

Use your own utensils and, to be even safer, bring your own food and drinks.

Skip the hugs, kisses and handshakes. If you go to the beach, find areas where you can stay at least six feet away from others who are not in your household. Takeout food is the safest. If you really want a meal out, eating outdoors with tables farther apart will be safer than dining in a crowded indoor restaurant.

Businesses should also heed this principle as they are reopening, by keeping up telecommuting and staggered shifts, reducing capacity in conference rooms, and closing communal dining areas. Museums can limit not only the number of people allowed in at once, but also the amount of time people are allowed to spend in each exhibit.

Pooled risk. If you engage in high-risk activity and are around others who do the same, you increase everyone’s risk. Think of the analogy with safe-sex practices: Those with multiple partners have higher risk than people in monogamous relationships. As applied to covid-19, this means those who have very low exposure are probably safe to associate with one another.

This principle is particularly relevant for separated families that want to see one another. I receive many questions from grandparents who miss their grandchildren and want to know when they can see them again. If two families have both been sheltering at home with virtually no outside interaction, there should be no concern with them being with one another. Families can come together for day care arrangements this way if all continue to abide by strict social distancing guidelines in other aspects of their lives. (The equation changes when any one individual resumes higher-risk activities — returning to work outside the home, for example.)

The info is here.

Saturday, April 4, 2020

Suicide attempt survivors’ recommendations for improving mental health treatment for attempt survivors.

Hom, M. A., et al.
Psychological Services. 
Advance online publication.
https://doi.org/10.1037/ser0000415

Abstract

Research indicates that connection to mental health care services and treatment engagement remain challenges among suicide attempt survivors. One way to improve suicide attempt survivors’ experiences with mental health care services is to elicit suggestions directly from attempt survivors regarding how to do so. This study aimed to identify and synthesize suicide attempt survivors’ recommendations for how to enhance mental health treatment experiences for attempt survivors. A sample of 329 suicide attempt survivors (81.5% female, 86.0% White/Caucasian, mean age = 35.07 ± 12.18 years) provided responses to an open-ended self-report survey question probing how treatment might be improved for suicide attempt survivors. Responses were analyzed utilizing both qualitative and quantitative techniques. Analyses identified four broad areas in which mental health treatment experiences might be improved for attempt survivors: (a) provider interactions (e.g., by reducing stigma of suicidality, expressing empathy, and using active listening), (b) intake and treatment planning (e.g., by providing a range of treatment options, including nonmedication treatments, and conducting a thorough assessment), (c) treatment delivery (e.g., by addressing root problems, bolstering coping skills, and using trauma-informed care), and (d) structural issues (e.g., by improving access to care and continuity of care). Findings highlight numerous avenues by which health providers might be able to facilitate more positive mental health treatment experiences for suicide attempt survivors. Research is needed to test whether implementing the recommendations offered by attempt survivors in this study might lead to enhanced treatment engagement, retention, and outcomes among suicide attempt survivors at large.

Here is an excerpt from the Discussion:

On this point, this study revealed numerous recommendations for how providers might be able to improve their interactions with attempt survivors. Suggestions in this domain aligned with prior studies on treatment experiences among suicide attempt survivors. For instance, recommendations that providers not stigmatize attempt survivors and, instead, empathize with them, actively listen to them, and humanize them, are consistent with aforementioned studies (Berglund et al., 2016; Frey et al., 2016; Shand et al., 2018; Sheehan et al., 2017; Taylor et al., 2009). This study’s findings regarding the importance of a collaborative therapeutic relationship are also consistent with previous work (Shand et al., 2018). Though each of these factors has been identified as salient to treatment engagement efforts broadly (see Barrett et al., 2008, for review), several suggestions that emerged in this study were more specific to attempt survivors. For example, ensuring that patients feel comfortable openly discussing suicidal thoughts and behaviors and taking disclosures of suicidality seriously are suggestions specifically applicable to the care of at-risk individuals. These recommendations not only support research indicating that asking about suicidality is not iatrogenic (see DeCou & Schumann, 2018, for review), but they also underscore the importance of considering the unique needs of attempt survivors. Indeed, given that most participants provided a recommendation in this area, the impact of provider-related factors should not be overlooked in the provision of care to this group.

Wednesday, December 11, 2019

When Assessing Novel Risks, Facts Are Not Enough

Baruch Fischhoff
Scientific American
September 2019

Here is an excerpt:

To start off, we wanted to figure out how well the general public understands the risks they face in everyday life. We asked groups of laypeople to estimate the annual death toll from causes such as drowning, emphysema and homicide and then compared their estimates with scientific ones. Based on previous research, we expected that people would make generally accurate predictions but that they would overestimate deaths from causes that get splashy or frequent headlines—murders, tornadoes—and underestimate deaths from “quiet killers,” such as stroke and asthma, that do not make big news as often.

Overall, our predictions fared well. People overestimated highly reported causes of death and underestimated ones that received less attention. Images of terror attacks, for example, might explain why people who watch more television news worry more about terrorism than individuals who rarely watch. But one puzzling result emerged when we probed these beliefs. People who were strongly opposed to nuclear power believed that it had a very low annual death toll. Why, then, would they be against it? The apparent paradox made us wonder if by asking them to predict average annual death tolls, we had defined risk too narrowly. So, in a new set of questions we asked what risk really meant to people. When we did, we found that those opposed to nuclear power thought the technology had a greater potential to cause widespread catastrophes. That pattern held true for other technologies as well.

To find out whether knowing more about a technology changed this pattern, we asked technical experts the same questions. The experts generally agreed with laypeople about nuclear power's death toll for a typical year: low. But when they defined risk themselves, on a broader time frame, they saw less potential for problems. The general public, unlike the experts, emphasized what could happen in a very bad year. The public and the experts were talking past each other and focusing on different parts of reality.

The info is here.

Monday, November 18, 2019

Understanding behavioral ethics can strengthen your compliance program

Jeffrey Kaplan
The FCPA Blog
Originally posted October 21, 2019

Behavioral ethics is a well-known field of social science which shows how — due to various cognitive biases — “we are not as ethical as we think.” Behavioral compliance and ethics (which is less well known) attempts to use behavioral ethics insights to develop and maintain effective compliance programs. In this post I explore some of the ways that this can be done.

Behavioral C&E should be viewed on two levels. The first could be called specific behavioral C&E lessons, meaning enhancements to the various discrete C&E program elements — e.g., risk assessment, training — based on behavioral ethics insights. Several of these are discussed below.

The second — and more general — aspect of behavioral C&E is the above-mentioned overarching finding that we are not as ethical as we think. The importance of this general lesson rests on the notion that the greatest challenge to having effective C&E programs in organizations is often more about the “will” than the “way.”

That is, what is lacking in many business organizations is an understanding that strong C&E is truly necessary. After all, if we were as ethical as we think, then effective risk mitigation would be just a matter of finding the right punishment for an offense, and the power of logical thinking would do the rest. Behavioral ethics teaches that this assumption is ill-founded.

The info is here.

Thursday, November 14, 2019

Assessing risk, automating racism

Ruha Benjamin
Science  25 Oct 2019:
Vol. 366, Issue 6464, pp. 421-422

Here is an excerpt:

Practically speaking, their finding means that if two people have the same risk score that indicates they do not need to be enrolled in a “high-risk management program,” the health of the Black patient is likely much worse than that of their White counterpart. According to Obermeyer et al., if the predictive tool were recalibrated to actual needs on the basis of the number and severity of active chronic illnesses, then twice as many Black patients would be identified for intervention. Notably, the researchers went well beyond the algorithm developers by constructing a more fine-grained measure of health outcomes, by extracting and cleaning data from electronic health records to determine the severity, not just the number, of conditions. Crucially, they found that so long as the tool remains effective at predicting costs, the outputs will continue to be racially biased by design, even though race is not explicitly taken into account. For this reason, Obermeyer et al. engage the literature on “problem formulation,” which illustrates that depending on how one defines the problem to be solved—whether to lower health care costs or to increase access to care—the outcomes will vary considerably.
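The "problem formulation" point can be sketched in a few lines. This is not the study's data or model; all patients, costs, and condition counts below are invented, purely to show that choosing a different label (cost vs. health need) flags a different set of people even when the ranking procedure is identical.

```python
# Toy illustration: the same ranking procedure applied to two different
# labels. All values are invented for illustration.
patients = [
    # (id, annual_cost_usd, active_chronic_conditions)
    ("A", 12000, 1),
    ("B", 4000, 4),
    ("C", 9000, 2),
    ("D", 3000, 5),
]

def top_k(rows, key, k=2):
    """Flag the k patients ranked highest on the chosen label."""
    return sorted(rows, key=key, reverse=True)[:k]

by_cost = {p[0] for p in top_k(patients, key=lambda p: p[1])}
by_need = {p[0] for p in top_k(patients, key=lambda p: p[2])}

print(by_cost)  # flagged when the label is cost
print(by_need)  # flagged when the label is health need
```

Here the cost label flags the two most expensive patients, while the need label flags the two sickest, and the groups do not overlap at all. When sickness and spending diverge systematically across groups, as Obermeyer et al. found, the choice of label becomes the source of bias.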

Wednesday, September 11, 2019

Assessment of Patient Nondisclosures to Clinicians of Experiencing Imminent Threats

Levy AG, Scherer AM, Zikmund-Fisher BJ, Larkin K, Barnes GD, Fagerlin A.
JAMA Netw Open. Published online August 14, 2019. 2(8):e199277.
doi:10.1001/jamanetworkopen.2019.9277

Question 

How common is it for patients to withhold information from clinicians about imminent threats that they face (depression, suicidality, abuse, or sexual assault), and what are common reasons for nondisclosure?

Findings 

This survey study, incorporating 2 national, nonprobability, online surveys of a total of 4,510 US adults, found that at least one-quarter of participants who experienced each imminent threat reported withholding this information from their clinician. The most commonly endorsed reasons for nondisclosure included embarrassment, fear of being judged, and not wanting to engage in difficult follow-up behavior.

Meaning

These findings suggest that concerns about potential negative repercussions may lead many patients who experience imminent threats to avoid disclosing this information to their clinician.

Conclusion

This study reveals an important concern about clinician-patient communication: if patients commonly withhold information from clinicians about significant threats that they face, then clinicians are unable to identify and attempt to mitigate these threats. Thus, these results highlight the continued need to develop effective interventions that improve the trust and communication between patients and their clinicians, particularly for sensitive, potentially life-threatening topics.

Monday, November 5, 2018

We Need To Examine The Ethics And Governance Of Artificial Intelligence

Nikita Malik
forbes.com
Originally posted October 4, 2018

Here is an excerpt:

The second concern is on regulation and ethics. Research teams at MIT and Harvard are already looking into the fast-developing area of AI to map the boundaries within which sensitive but important data can be used. Who determines whether this technology can save lives, for example, versus the very real risk of veering into an Orwellian dystopia?

Take artificial intelligence systems that have the ability to predict a crime based on an individual’s history and propensity to do harm. Pennsylvania could be one of the first states in the United States to base criminal sentences not just on the crimes people are convicted of, but also on whether they are deemed likely to commit additional crimes in the future. Statistically derived risk assessments – based on factors such as age, criminal record, and employment – will help judges determine which sentences to give. This would help reduce the cost of, and burden on, the prison system.
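A statistically derived risk assessment of the kind described above is often just a weighted combination of factors mapped to a probability. The sketch below uses a logistic model with entirely hypothetical weights and factors; real tools differ in their inputs, weights, and validation.

```python
import math

# Hypothetical weights for illustration only.
WEIGHTS = {
    "intercept": -2.0,
    "age_under_25": 0.8,
    "prior_convictions": 0.5,   # per prior conviction
    "unemployed": 0.6,
}

def risk_score(age, prior_convictions, employed):
    """Map factors to a reoffending probability via a logistic link."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["age_under_25"] * (age < 25)
         + WEIGHTS["prior_convictions"] * prior_convictions
         + WEIGHTS["unemployed"] * (not employed))
    return 1 / (1 + math.exp(-z))  # probability in (0, 1)

print(round(risk_score(22, 3, employed=False), 3))
print(round(risk_score(40, 0, employed=True), 3))
```

Even this tiny sketch makes the ethical stakes visible: the score moves with age and employment status, attributes a defendant cannot change at sentencing, which is precisely what the debate over such tools is about.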

Risk assessments – which have existed for a long time – have been used in other areas, such as the prevention of terrorism and child sexual exploitation. In the latter category, existing human systems are so overburdened that children are often overlooked, at grave risk to themselves. Human errors in the casework of the severely abused child Gabriel Fernandez contributed to his eventual death at the hands of his parents and prompted a serious inquest into the shortcomings of the County Department of Children and Family Services in Los Angeles. Using artificial intelligence in vulnerability assessments of children could aid overworked caseworkers and administrators and flag errors in existing systems.

The info is here.

Friday, October 19, 2018

Risk Management Considerations When Treating Violent Patients

Kristen Lambert
Psychiatric News
Originally posted September 4, 2018

Here is an excerpt:

When a patient has a history of expressing homicidal ideation or has been violent previously, you should document, in every subsequent session, whether the patient admits or denies homicidal ideation. When the patient expresses homicidal ideation, document what he/she expressed and the steps you did or did not take in response and why. Should an incident occur, your documentation will play an important role in defending your actions.

Despite taking precautions, your patient may still commit a violent act. The following are some strategies that may minimize your risk.

  • Conduct complete, timely, and thorough risk assessments.
  • Document, including the reasons for taking and not taking certain actions.
  • Understand your state’s law on duty to warn. Be aware of the language in the law on whether you have a mandatory, permissive, or no duty to warn/protect.
  • Understand your state’s laws regarding civil commitment.
  • Understand your state’s laws regarding disclosure of confidential information and when you can do so.
  • Understand your state’s laws regarding discussing firearms ownership and/or possession with patients.
  • If you have questions, consult an attorney or risk management professional.

Friday, July 27, 2018

Morality in the Machines

Erick Trickery
Harvard Law Bulletin
Originally posted June 26, 2018

Here is an excerpt:

In February, the Harvard and MIT researchers endorsed a revised approach in the Massachusetts House’s criminal justice bill, which calls for a bail commission to study risk-assessment tools. In late March, the House-Senate conference committee included the more cautious approach in its reconciled criminal justice bill, which passed both houses and was signed into law by Gov. Charlie Baker in April.

Meanwhile, Harvard and MIT scholars are going still deeper into the issue. Bavitz and a team of Berkman Klein researchers are developing a database of governments that use risk scores to help set bail. It will be searchable to see whether court cases have challenged a risk-score tool’s use, whether that tool is based on peer-reviewed scientific literature, and whether its formulas are public.

Many risk-score tools are created by private companies that keep their algorithms secret. That lack of transparency creates due-process concerns, says Bavitz. “Flash forward to a world where a judge says, ‘The computer tells me you’re a risk score of 5 out of 7.’ What does it mean? I don’t know. There’s no opportunity for me to lift up the hood of the algorithm.” Instead, he suggests governments could design their own risk-assessment algorithms and software, using staff or by collaborating with foundations or researchers.

Students in the ethics class agreed that risk-score programs shouldn’t be used in court if their formulas aren’t transparent, according to then-HLS 3L Arjun Adusumilli. “When people’s liberty interests are at stake, we really expect a certain amount of input, feedback and appealability,” he says. “Even if the thing is statistically great, and makes good decisions, we want reasons.”

The information is here.

Thursday, February 15, 2018

Engineers, philosophers and sociologists release ethical design guidelines for future technology

Rafael A Calvo and Dorian Peters
The Conversation
Originally posted December 12, 2017

Here is an excerpt:

The big questions posed by our digital future sit at the intersection of technology and ethics. This is complex territory that requires input from experts in many different fields if we are to navigate it successfully.

To prepare the report, economists and sociologists researched the effect of technology on disempowered groups. Lawyers considered the future of privacy and justice. Doctors and psychologists examined impacts on physical and mental health. Philosophers unpacked hidden biases and moral questions.

The report suggests all technologies should be guided by five general principles:

  • protecting human rights
  • prioritising and employing established metrics for measuring wellbeing
  • ensuring designers and operators of new technologies are accountable
  • making processes transparent
  • minimising the risks of misuse.

Sticky questions

The report spans a spectrum from practical to more abstract concerns, touching on personal data ownership, autonomous weapons, job displacement and questions like “can decisions made by amoral systems have moral consequences?”

One section deals with a “lack of ownership or responsibility from the tech community”. It points to a divide between how the technology community sees its ethical responsibilities and the broader social concerns raised by public, legal, and professional communities.

The article is here.

Tuesday, February 13, 2018

How Should Physicians Make Decisions about Mandatory Reporting When a Patient Might Become Violent?

Amy Barnhorst, Garen Wintemute, and Marian Betz
AMA Journal of Ethics. January 2018, Volume 20, Number 1: 29-35.

Abstract

Mandatory reporting of persons believed to be at imminent risk for committing violence or attempting suicide can pose an ethical dilemma for physicians, who might find themselves struggling to balance various conflicting interests. Legal statutes dictate general scenarios that require mandatory reporting to supersede confidentiality requirements, but physicians must use clinical judgment to determine whether and when a particular case meets the requirement. In situations in which it is not clear whether reporting is legally required, the situation should be analyzed for its benefit to the patient and to public safety. Access to firearms can complicate these situations, as firearms are a well-established risk factor for violence and suicide yet also a sensitive topic about which physicians and patients might have strong personal beliefs.

The commentary is here.