Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, November 18, 2023

Resolving the battle of short- vs. long-term AI risks

Sætra, H.S., Danaher, J.
AI Ethics (2023).

Abstract

AI poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to think two thoughts at the same time. While disagreements over the exact probabilities and impacts of risks will remain, fostering a more productive dialogue will be important. This entails, for example, distinguishing between evaluations of particular risks and the politics of risk. Without proper discussions of AI risk, it will be difficult to properly manage them, and we could end up in a situation where neither short- nor long-term risks are managed and mitigated.


Here is my summary:

Artificial intelligence (AI) poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to prioritize these risks. Some argue that short-term risks, such as bias and discrimination, are more pressing and should be addressed first, while others argue that long-term risks, such as the possibility of AI surpassing human intelligence and becoming uncontrollable, are more serious and should be prioritized.

Sætra and Danaher argue that it is important to consider both short- and long-term risks when developing AI policies and regulations. They point out that short-term risks can have long-term consequences, and that long-term risks can have short-term impacts. For example, if AI is biased against certain groups of people, this could lead to long-term inequality and injustice. Conversely, if we take steps to mitigate long-term risks, such as by developing safety standards for AI systems, this could also reduce short-term risks.

Sætra and Danaher offer a number of suggestions for how to better balance short- and long-term AI risks. One suggestion is to develop a risk matrix that categorizes risks by their impact and likelihood. This could help policymakers to identify and prioritize the most important risks. Another suggestion is to create a research agenda that addresses both short- and long-term risks. This would help to ensure that we are investing in the research that is most needed to keep AI safe and beneficial.
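
To make the risk-matrix suggestion concrete, here is a minimal sketch in Python of how risks might be scored by likelihood and impact and then ranked. The example risks, the 1-5 scales, and the multiplicative priority score are illustrative assumptions on my part, not details from Sætra and Danaher's paper.

```python
# A minimal sketch of a risk matrix: score each risk by likelihood and
# impact, then sort so the highest-priority items surface first.
# The risks, scores, and 1-5 scales below are illustrative assumptions,
# not taken from Sætra and Danaher (2023).

from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (minor) to 5 (catastrophic)

    @property
    def priority(self) -> int:
        # Simple likelihood x impact score used in many risk matrices.
        return self.likelihood * self.impact


risks = [
    Risk("Biased model harms protected groups", likelihood=4, impact=3),
    Risk("Large-scale AI-driven disinformation", likelihood=3, impact=4),
    Risk("Loss of control over highly capable systems", likelihood=1, impact=5),
]

# One ranking that places short- and long-term risks side by side.
for risk in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"{risk.priority:>2}  {risk.name}")
```

Putting short- and long-term risks into a single ranking like this is one way to force the kind of shared conversation the authors are calling for, even if the individual scores remain contested.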

Monday, May 11, 2020

Why some nurses have quit during the coronavirus pandemic

Safia Samee Ali
nbcnews.com
Originally posted 10 May 2020

Here is an excerpt:

“It was an extremely difficult decision, but as a mother and wife, the health of my family will always come first. In the end, I could not accept that I could be responsible for causing one of my family members to become severely ill or possibly die.”

As COVID-19 has infected more than one million Americans, nurses working on the front lines of the pandemic with little protective support have made the gut-wrenching decision to step away from their jobs, saying they were ill-equipped and unable to fight the disease and feared not only for their own safety but also for that of their families.

Many of these nurses, who have faced backlash for quitting, say new CDC protocols have made them feel expendable and have not kept their safety in mind, leaving them no choice but to walk away from a job they loved.

'We're not cannon fodder, we're human beings'

As the nation took stock of its dwindling medical supplies in the early days of the pandemic, CDC guidance regarding personal protective equipment quickly took a back seat.

N95 masks, which had previously been the acceptable standard of protective care for both patients and medical personnel, were depleting so commercial grade masks, surgical masks, and in the most extreme cases homemade masks such as scarves and bandanas were all sanctioned by the CDC -- which did not return a request for comment -- to counter the lacking resources.

The info is here.

Tuesday, December 4, 2018

Letting tech firms frame the AI ethics debate is a mistake

Robert Hart
www.fastcompany.com
Originally posted November 2, 2018

Here is an excerpt:

Even many ethics-focused panel discussions–or manel discussions, as some call them–are pale, male, and stale. That is to say, they are made up predominantly of old, white, straight, and wealthy men. Yet these discussions are meant to be guiding lights for AI technologies that affect everyone.

A historical illustration is useful here. Consider polio, a disease that was declared global public health enemy number one after the successful eradication of smallpox decades ago. The “global” part is important. Although the movement to eradicate polio was launched by the World Health Assembly, the decision-making body of the United Nations’ World Health Organization, the eradication campaign was spearheaded primarily by groups in the U.S. and similarly wealthy countries. Promulgated with intense international pressure, the campaign distorted local health priorities in many parts of the developing world.

It’s not that the developing countries wanted their citizens to contract polio. Of course, they didn’t. It’s just that they would have rather spent the significant sums of money on more pressing local problems. In essence, one wealthy country imposed their own moral judgement on the rest of the world, with little forethought about the potential unintended consequences. The voices of a few in the West grew to dominate and overpower those elsewhere–a kind of ethical colonialism, if you will.

The info is here.

Friday, June 10, 2016

Decriminalizing Mental Illness — The Miami Model

John K. Iglehart
N Engl J Med 2016; 374:1701-1703

Here is an excerpt:

Miami-Dade’s initiative was launched in 2000, when Judge Leifman, frustrated by the fact that people with mental disorders were cycling through his court repeatedly, created the Eleventh Judicial Circuit Criminal Mental Health Project (CMHP). As Leifman explained, “When I became a judge . . . I had no idea I would become the gatekeeper to the largest psychiatric facility in the State of Florida. . . . Of the roughly 100,000 bookings into the [county] jail every year, nearly 20,000 involve people with serious mental illnesses requiring intensive psychiatric treatment while incarcerated. . . . Because community-based delivery systems are often fragmented, difficult to navigate, and slow to respond to critical needs, many individuals with the most severe and disabling forms of mental illnesses . . . fall through the cracks and land in the criminal justice or state hospital systems” that emphasize crisis resolution rather than “promoting ongoing stable recovery and community integration.”

The article is here.

Friday, October 17, 2014

Doctrine of Double Effect

By Alison McIntyre
The Stanford Encyclopedia of Philosophy
Winter 2014

The doctrine (or principle) of double effect is often invoked to explain the permissibility of an action that causes a serious harm, such as the death of a human being, as a side effect of promoting some good end. According to the principle of double effect, sometimes it is permissible to cause a harm as a side effect (or “double effect”) of bringing about a good result even though it would not be permissible to cause such a harm as a means to bringing about the same good end.

(cut)

For example, a physician's justification for administering drugs to relieve a patient's pain while foreseeing the hastening of death as a side effect does not depend only on the fact that the physician does not intend to hasten death. After all, physicians are not permitted to relieve the pain of kidney stones or childbirth with potentially lethal doses of opiates simply because they foresee but do not intend the causing of death as a side effect! A variety of substantive medical and ethical judgments provide the justificatory context: the patient is terminally ill, there is an urgent need to relieve pain and suffering, death is imminent, and the patient or the patient's proxy consents. Note that this last constraint, the consent of the patient or the patient's proxy, is not naturally classified as a concern with proportionality, understood as the weighing of harms and benefits.

The entry is here.