Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Accountability.

Thursday, March 14, 2024

A way forward for responsibility in the age of AI

Gogoshin, D.L.
Inquiry (2024)

Abstract

Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what are the goods attached to them? The debate concerning ‘machine morality’ is often hinged on whether artificial agents are or could ever be morally responsible, and it is generally taken for granted (following Matthias 2004) that if they cannot, they pose a threat to the moral responsibility system and associated goods. In this paper, I challenge this assumption by asking what the goods of this system, if any, are, and what happens to them in the face of artificially intelligent agents. I will argue that they neither introduce new problems for the moral responsibility system nor do they threaten what we really (ought to) care about. I conclude the paper with a proposal for how to secure this objective.


Here is my summary:

While AI may not possess true moral agency, it is still crucial to consider how the development and use of AI can be made more responsible. The author challenges the assumption that AI's lack of moral responsibility inherently threatens our moral responsibility system. Instead, they focus on the "goods" this system provides, such as the practices of assigning blame and praise, and on how these can be upheld even in the presence of AI. To achieve this, the author proposes several steps, including:
  1. Shifting the focus from AI's moral agency to the agency of those who design, build, and use it. This means holding these individuals accountable for the societal impacts of AI.
  2. Developing clear ethical guidelines for AI development and use. These guidelines should be comprehensive, addressing issues like fairness, transparency, and accountability.
  3. Creating robust oversight mechanisms. This could involve independent bodies that monitor AI development and use, and have the power to intervene when necessary.
  4. Promoting public understanding of AI. This will help people make informed decisions about how AI is used in their lives and hold developers and users accountable.

Sunday, March 10, 2024

MAGA’s Violent Threats Are Warping Life in America

David French
New York Times - Opinion
Originally published 18 Feb 24

Amid the constant drumbeat of sensational news stories — the scandals, the legal rulings, the wild political gambits — it’s sometimes easy to overlook the deeper trends that are shaping American life. For example, are you aware how much the constant threat of violence, principally from MAGA sources, is now warping American politics? If you wonder why so few people in red America seem to stand up directly against the MAGA movement, are you aware of the price they might pay if they did?

Late last month, I listened to a fascinating NPR interview with the journalists Michael Isikoff and Daniel Klaidman regarding their new book, “Find Me the Votes,” about Donald Trump’s efforts to overturn the 2020 election. They report that Georgia prosecutor Fani Willis had trouble finding lawyers willing to help prosecute her case against Trump. Even a former Georgia governor turned her down, saying, “Hypothetically speaking, do you want to have a bodyguard follow you around for the rest of your life?”

He wasn’t exaggerating. Willis received an assassination threat so specific that one evening she had to leave her office incognito while a body double wearing a bulletproof vest courageously pretended to be her and offered a target for any possible incoming fire.


Here is my summary of the article:

David French discusses the pervasive threat of violence, particularly from MAGA sources, and its impact on American politics. He highlights instances in which individuals faced intimidation and threats for opposing the MAGA movement, such as a Georgia prosecutor receiving an assassination threat and judges being swatted. The article also notes the significant increase in threats against members of Congress since Trump took office, with Capitol Police opening more than 8,000 threat assessments in a single year. The piece sheds light on the chilling effect these threats have on individuals like Mitt Romney, who spends $5,000 per day on security, and on lawmakers who fear for their families' safety. The overall narrative underscores how these violent threats are warping American life and politics.

Saturday, March 2, 2024

Unraveling the Mindset of Victimhood

Scott Barry Kaufman
Scientific American
Originally posted 29 June 2020

Here is an excerpt:

Constantly seeking recognition of one’s victimhood. Those who score high on this dimension have a perpetual need to have their suffering acknowledged. In general, this is a normal psychological response to trauma. Experiencing trauma tends to “shatter our assumptions” about the world as a just and moral place. Recognition of one’s victimhood is a normal response to trauma and can help reestablish a person’s confidence in their perception of the world as a fair and just place to live.

Also, it is normal for victims to want the perpetrators to take responsibility for their wrongdoing and to express feelings of guilt. Studies conducted on testimonies of patients and therapists have found that validation of the trauma is important for therapeutic recovery from trauma and victimization (see here and here).

A sense of moral elitism. Those who score high on this dimension perceive themselves as having an immaculate morality and view everyone else as being immoral. Moral elitism can be used to control others by accusing others of being immoral, unfair or selfish, while seeing oneself as supremely moral and ethical.

Moral elitism often develops as a defense mechanism against deeply painful emotions and as a way to maintain a positive self-image. As a result, those under distress tend to deny their own aggressiveness and destructive impulses and project them onto others. The “other” is perceived as threatening whereas the self is perceived as persecuted, vulnerable and morally superior.


Here is a summary:

Kaufman explores the concept of "interpersonal victimhood," a tendency to view oneself as the repeated target of unfair treatment by others. He identifies several key characteristics of this mindset, including:
  • Belief in inherent unfairness: The conviction that the world is fundamentally unjust and that one is disproportionately likely to experience harm.
  • Moral self-righteousness: The perception of oneself as more ethical and deserving of good treatment compared to others.
  • Rumination on past injustices: Dwelling on and replaying negative experiences, often with feelings of anger and resentment.
  • Difficulty taking responsibility: Attributing negative outcomes to external factors rather than acknowledging one's own role.
Kaufman argues that while acknowledging genuine injustices is important, clinging to a victimhood identity can be detrimental: it can hinder personal growth, strain relationships, and fuel negativity. He emphasizes the importance of developing a more balanced perspective that acknowledges both external challenges and personal agency. The article also offers strategies for fostering resilience.

Wednesday, February 28, 2024

Scientists are on the verge of a male birth-control pill. Will men take it?

Jill Filipovic
The Guardian
Originally posted 18 Dec 23

Here is an excerpt:

The overwhelming share of responsibility for preventing pregnancy has always fallen on women. Throughout human history, women have gone to great lengths to prevent pregnancies they didn’t want, and end those they couldn’t prevent. Safe and reliable contraceptive methods are, in the context of how long women have sought to interrupt conception, still incredibly new. Measured by the lifespan of anyone reading this article, though, they are well established, and have for many decades been a normal part of life for millions of women around the world.

To some degree, and if only for obvious biological reasons, it makes sense that pregnancy prevention has historically fallen on women. But it also, as they say, takes two to tango – and only one of the partners has been doing all the work. Luckily, things are changing: thanks to generations of women who have gained unprecedented freedoms and planned their families using highly effective contraception methods, and thanks to men who have shifted their own gender expectations and become more involved partners and fathers, women and men have moved closer to equality than ever.

Among politically progressive couples especially, it’s now standard to expect that a male partner will do his fair share of the household management and childrearing (whether he actually does is a separate question, but the expectation is there). What men generally cannot do, though, is carry pregnancies and birth babies.


Here are some themes worthy of discussion:

Shifting responsibility: The potential availability of a reliable male contraceptive marks a significant departure from the historical norm in which the burden of pregnancy prevention was borne primarily by women. This shift raises several questions about how couples and society will adapt.

Gender equality: A crucial consideration is whether men will willingly share responsibility for contraception on an equal footing, or whether societal norms will continue to exert pressure on women to take the lead in this regard.

Reproductive autonomy: The advent of accessible male contraception prompts contemplation on whether it will empower women to exert greater control over their reproductive choices, shaping the landscape of family planning.

Informed consent: An important facet of this shift involves how men will be informed about potential side effects and risks associated with the male contraceptive, particularly in comparison to existing female contraceptives.

Accessibility and equity: Concerns emerge regarding equitable access to the male contraceptive, particularly for marginalized communities. Questions arise about whether affordable and culturally appropriate access will be universally available, regardless of socioeconomic status or geographic location.

Coercion: There is a potential concern that the availability of a male contraceptive might be exploited to coerce women into sexual activity without their full and informed consent.

Psychological and social impact: The introduction of a male contraceptive brings with it potential psychological and social consequences that may not be immediately apparent.

Changes in sexual behavior: The availability of a male contraceptive may influence sexual practices and attitudes towards sex, prompting a reevaluation of societal norms.

Impact on relationships: The shift in responsibility for contraception could potentially cause tension or conflict in existing relationships as couples navigate the evolving dynamics.

Masculinity and stigma: The use of a male contraceptive may challenge traditional notions of masculinity and could expose the men who use it to social stigma.

Friday, February 2, 2024

Young people turning to AI therapist bots

Joe Tidy
BBC.com
Originally posted 4 Jan 24

Here is an excerpt:

Sam has been so surprised by the success of the bot that he is working on a post-graduate research project about the emerging trend of AI therapy and why it appeals to young people. Character.ai is dominated by users aged 16 to 30.

"So many people who've messaged me say they access it when their thoughts get hard, like at 2am when they can't really talk to any friends or a real therapist,"
Sam also guesses that the text format is one with which young people are most comfortable.
"Talking by text is potentially less daunting than picking up the phone or having a face-to-face conversation," he theorises.

Theresa Plewman is a professional psychotherapist and has tried out Psychologist. She says she is not surprised this type of therapy is popular with younger generations, but questions its effectiveness.

"The bot has a lot to say and quickly makes assumptions, like giving me advice about depression when I said I was feeling sad. That's not how a human would respond," she said.

Theresa says the bot fails to gather all the information a human would and is not a competent therapist. But she says its immediate and spontaneous nature might be useful to people who need help.
She says the number of people using the bot is worrying and could point to high levels of mental ill health and a lack of public resources.


Here are some important points:

Reasons for appeal:
  • Cost: Traditional therapy's expense and limited availability drive some towards bots, seen as cheaper and readily accessible.
  • Stigma: Stigma associated with mental health might make bots a less intimidating first step compared to human therapists.
  • Technology familiarity: Young people, comfortable with technology, find text-based interaction with bots familiar and less daunting than face-to-face sessions.
Concerns and considerations:
  • Bias: Bots trained on potentially biased data might offer inaccurate or harmful advice, reinforcing existing prejudices.
  • Qualifications: Lack of professional mental health credentials and oversight raises concerns about the quality of support provided.
  • Limitations: Bots aren't replacements for human therapists. Complex issues or severe cases require professional intervention.

Monday, November 6, 2023

Abuse Survivors ‘Disgusted’ by Southern Baptist Court Brief

Bob Smietana
Christianity Today
Originally published 26 OCT 23

Here is an excerpt:

Members of the Executive Committee, including Oklahoma pastor Mike Keahbone, expressed dismay at the brief, with Keahbone saying he and other members of the committee were blindsided by it. Keahbone, a member of a task force implementing abuse reforms in the SBC, said the brief undermined survivors such as Thigpen, Woodson, and Lively, who have supported the reforms.

“We’ve had survivors that have been faithful to give us a chance,” he told Religion News Service in a phone interview. “And we hurt them badly.”

The controversy over the amicus brief is the latest crisis for leaders of the nation’s largest Protestant denomination, which has dealt with a revolving door of leaders and rising legal costs in the aftermath of a sexual abuse crisis in recent years.

The denomination passed abuse reforms in 2022 but has been slow to implement them, relying mostly on a volunteer task force charged with convincing the SBC’s 47,000 congregations and a host of state and national entities to put those reforms into practice. Those delays have led survivors to be skeptical that things would actually change.

Earlier this week, the Louisville Courier Journal reported that lawyers for the Executive Committee, Southern Baptist Theological Seminary—the denomination’s flagship seminary in Louisville—and Lifeway had filed the amicus brief earlier this year in a case brought by abuse survivor Samantha Killary.


Here is my summary: 

In October 2023, it came to light that lawyers for Southern Baptist Convention (SBC) entities had filed an amicus curiae brief in the Kentucky Supreme Court arguing that a new law extending the statute of limitations for child sexual abuse claims should not apply retroactively. The filing sparked outrage among abuse survivors and some SBC leaders, who accused the denomination of prioritizing its own legal interests over the needs of victims.

The brief was filed in a case brought by a woman who was sexually abused as a child by a Louisville police officer. She is suing the city of Louisville and the police department, arguing that they should be held liable because they failed to protect her.

The SBC's brief argues that the new statute of limitations should not apply retroactively because it would create a "windfall" for abuse survivors who would not have been able to sue under the previous law. The brief also argues that applying the new law retroactively would be unfair to institutions like the SBC, which could be faced with a flood of lawsuits.

Abuse survivors and some SBC leaders have criticized the brief as being insensitive to the needs of victims. They argue that the SBC is more interested in protecting itself from lawsuits than in ensuring that victims of abuse are able to seek justice.

In a joint statement, three abuse survivors said they were "sickened and saddened to be burned yet again by the actions of the SBC against survivors." They accused the SBC of "proactively choosing to side against a survivor and with an abuser and the institution that enabled his abuse."

Saturday, September 9, 2023

Academics Raise More Than $315,000 for Data Bloggers Sued by Harvard Business School Professor Gino

Neil H. Shah & Claire Yuan
The Crimson
Originally published 1 Sept 23

A group of academics has raised more than $315,000 through a crowdfunding campaign to support the legal expenses of the professors behind data investigation blog Data Colada — who are being sued for defamation by Harvard Business School professor Francesca Gino.

Supporters of the three professors — Uri Simonsohn, Leif D. Nelson, and Joseph P. Simmons — launched the GoFundMe campaign to raise funds for their legal fees after they were named in a $25 million defamation lawsuit filed by Gino last month.

In a series of four blog posts in June, Data Colada gave a detailed account of alleged research misconduct by Gino across four academic papers. Two of the papers were retracted following the allegations by Data Colada, while another had previously been retracted in September 2021 and a fourth is set to be retracted in September 2023.

Organizers wrote on GoFundMe that the fundraiser “hit 2,000 donors and $250K in less than 2 days” and that Simonsohn, Nelson, and Simmons “are deeply moved and grateful for this incredible show of support.”

Simine Vazire, one of the fundraiser’s organizers, said she was “pleasantly surprised” by the reaction throughout academia in support of Data Colada.

“It’s been really nice to see the consensus among the academic community, which is strikingly different than what I see on LinkedIn and the non-academic community,” she said.

Elisabeth M. Bik — a data manipulation expert who also helped organize the fundraiser — credited the outpouring of financial support to solidarity and concern among scientists.

“People are very concerned about this lawsuit and about the potential silencing effect this could have on people who criticize other people’s papers,” Bik said. “I think a lot of people want to support Data Colada for their legal defenses.”

Andrew T. Miltenberg — one of Gino’s attorneys — wrote in an emailed statement that the lawsuit is “not an indictment on Data Colada’s mission.”

Wednesday, August 16, 2023

A Federal Judge Asks: Does the Supreme Court Realize How Bad It Smells?

Michael Ponsor
The New York Times: Opinion
Originally posted 14 July 23

What has gone wrong with the Supreme Court’s sense of smell?

I joined the federal bench in 1984, some years before any of the justices currently on the Supreme Court. Throughout my career, I have been bound and guided by a written code of conduct, backed by a committee of colleagues I can call on for advice. In fact, I checked with a member of that committee before writing this essay.

A few times in my nearly 40 years on the bench, complaints have been filed against me. This is not uncommon for a federal judge. So far, none have been found to have merit, but all of these complaints have been processed with respect, and I have paid close attention to them.

The Supreme Court has avoided imposing a formal ethical apparatus on itself like the one that applies to all other federal judges. I understand the general concern, in part. A complaint mechanism could become a political tool to paralyze the court or a playground for gadflies. However, a skillfully drafted code could overcome this problem. Even a nonenforceable code that the justices formally pledged to respect would be an improvement on the current void.

Reasonable people may disagree on this. The more important, uncontroversial point is that if there will not be formal ethical constraints on our Supreme Court — or even if there will be — its justices must have functioning noses. They must keep themselves far from any conduct with a dubious aroma, even if it may not breach a formal rule.

The fact is, when you become a judge, stuff happens. Many years ago, as a fairly new federal magistrate judge, I was chatting about our kids with a local attorney I knew only slightly. As our conversation unfolded, he mentioned that he’d been planning to take his 10-year-old to a Red Sox game that weekend but their plan had fallen through. Would I like to use his tickets?

Sunday, July 23, 2023

How to Use AI Ethically for Ethical Decision-Making

Demaree-Cotton, J., Earp, B. D., & Savulescu, J.
(2022). American Journal of Bioethics, 22(7), 1–3.

Here is an excerpt:

The kind of AI proposed by Meier and colleagues (2022) has the fascinating potential to improve the transparency of ethical decision-making, at least if it is used as a decision aid rather than a decision replacement (Savulescu & Maslen 2015). While artificial intelligence cannot itself engage in the human communicative process of justifying its decisions to patients, the AI they describe (unlike “black-box” AI) makes explicit which values and principles are involved and how much weight they are given.

By contrast, the moral principles or values underlying human moral intuition are not always consciously, introspectively accessible (Cushman, Young, and Hauser 2006). While humans sometimes have a fuzzy, intuitive sense of some of the factors that are relevant to their moral judgment, we often have strong moral intuitions without being sure of their source, or without being clear on precisely how strongly different factors played a role in generating the intuitions. But if clinicians make use of the AI as a decision aid, this could help them to transparently and precisely communicate the actual reasons behind their decision.

This is so even if the AI’s recommendation is ultimately rejected. Suppose, for example, that the AI recommends a course of action, with a certain amount of confidence, and it specifies the exact values or weights it has assigned to autonomy versus beneficence in coming to this conclusion. Evaluating the recommendation made by the AI could help a committee make more explicit the “black box” aspects of their own reasoning. For example, the committee might decide that beneficence should actually be weighted more heavily in this case than the AI suggests. Being able to understand the reason that their decision diverges from that of the AI gives them the opportunity to offer a further justifying reason as to why they think beneficence should be given more weight; and this, in turn, could improve the transparency of their recommendation.

However, the potential for the kind of AI described in the target article to improve the accuracy of moral decision-making may be more limited. This is so for two reasons. Firstly, whether AI can be expected to outperform human decision-making depends in part on the metrics used to train it. In non-ethical domains, superior accuracy can be achieved because the “verdicts” given to the AI in the training phase are not solely the human judgments that the AI is intended to replace or inform. Consider how AI can learn to detect lung cancer from scans at a superior rate to human radiologists after being trained on large datasets and being “told” which scans show cancer and which ones are cancer-free. Importantly, this training includes cases where radiologists did not recognize cancer in the early scans themselves, but where further information verified the correct diagnosis later on (Ardila et al. 2019). Consequently, these AIs are able to detect patterns even in early scans that are not apparent or easily detectable by human radiologists, leading to superior accuracy compared to human performance.
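
To make the idea of explicit values and weights more concrete, here is a minimal sketch of what a transparent decision aid could expose. It is purely illustrative: the principles, weights, scores, and the confidence heuristic are my own assumptions, not the system Meier and colleagues actually describe.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float           # 0.0 to 1.0
    weights: dict[str, float]   # how much each principle counted
    scores: dict[str, float]    # how well the chosen action satisfies each principle

def recommend(options: dict[str, dict[str, float]],
              weights: dict[str, float]) -> Recommendation:
    """Pick the option with the highest weighted sum of principle scores.

    `options` maps each candidate action to per-principle scores in [0, 1];
    `weights` maps each principle (e.g. "autonomy", "beneficence") to its weight.
    All numbers here are hypothetical, for illustration only.
    """
    totals = {
        action: sum(weights[p] * score for p, score in scores.items())
        for action, scores in options.items()
    }
    best = max(totals, key=totals.get)
    others = sorted(totals.values(), reverse=True)[1:]
    margin = totals[best] - (others[0] if others else 0.0)
    confidence = min(1.0, 0.5 + margin)  # crude heuristic: wider margin, higher confidence
    return Recommendation(best, confidence, weights, options[best])

# Hypothetical case: the aid weights beneficence more heavily than autonomy.
rec = recommend(
    options={
        "continue treatment": {"autonomy": 0.3, "beneficence": 0.9},
        "withdraw treatment": {"autonomy": 0.8, "beneficence": 0.4},
    },
    weights={"autonomy": 0.4, "beneficence": 0.6},
)
print(rec)  # every weight and score is visible, so a committee can contest them
```

Because the weights and scores are explicit, a committee that rejects the recommendation (say, because it thinks beneficence deserves even more weight in this case) can state exactly where its reasoning departs from the aid's, which is the transparency gain the authors highlight.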

Saturday, July 22, 2023

Generative AI companies must publish transparency reports

A. Narayanan and S. Kapoor
Knight First Amendment Institute
Originally published 26 June 23

Here is an excerpt:

Transparency reports must cover all three types of harms from AI-generated content

There are three main types of harms that may result from model outputs.

First, generative AI tools could be used to harm others, such as by creating non-consensual deepfakes or child sexual exploitation materials. Developers do have policies that prohibit such uses. For example, OpenAI's policies prohibit a long list of uses, including the use of its models to generate unauthorized legal, financial, or medical advice for others. But these policies cannot have real-world impact if they are not enforced, and due to platforms' lack of transparency about enforcement, we have no idea if they are effective. Similar challenges in ensuring platform accountability have also plagued social media in the past; for instance, ProPublica reporters repeatedly found that Facebook failed to fully remove discriminatory ads from its platform despite claiming to have done so.

Sophisticated bad actors might use open-source tools to generate content that harms others, so enforcing use policies can never be a comprehensive solution. In a recent essay, we argued that disinformation is best addressed by focusing on its distribution (e.g., on social media) rather than its generation. Still, some actors will use tools hosted in the cloud either due to convenience or because the most capable models don’t tend to be open-source. For these reasons, transparency is important for cloud-based generative AI.

Second, users may over-rely on AI for factual information, such as legal, financial, or medical advice. Sometimes they are simply unaware of the tendency of current chatbots to frequently generate incorrect information. For example, a user might ask "what are the divorce laws in my state?" and not know that the answer is unreliable. Alternatively, the user might be harmed because they weren’t careful enough to verify the generated information, despite knowing that it might be inaccurate. Research on automation bias shows that people tend to over-rely on automated tools in many scenarios, sometimes making more errors than when not using the tool.

ChatGPT includes a disclaimer that it sometimes generates inaccurate information. But OpenAI has often touted its performance on medical and legal exams. And importantly, the tool is often genuinely useful at medical diagnosis or legal guidance. So, regardless of whether it’s a good idea to do so, people are in fact using it for these purposes. That makes harm reduction important, and transparency is an important first step.

Third, generated content could be intrinsically undesirable. Unlike the previous types, here the harms arise not because of users' malice, carelessness, or lack of awareness of limitations. Rather, intrinsically problematic content is generated even though it wasn’t requested. For example, Lensa's avatar creation app generated sexualized images and nudes when women uploaded their selfies. Defamation is also intrinsically harmful rather than a matter of user responsibility. It is no comfort to the target of defamation to say that the problem would be solved if every user who might encounter a false claim about them were to exercise care to verify it.


Quick summary: 

The call for transparency reports aims to increase accountability and understanding of the inner workings of generative AI models. By disclosing information about the data used to train the models, the companies can address concerns regarding potential biases and ensure the ethical use of their technology.

Transparency reports could include details about the sources and types of data used, the demographics represented in the training data, any data augmentation techniques applied, and potential biases detected or addressed during model development. This information would enable users, policymakers, and researchers to evaluate the capabilities and limitations of the generative AI systems.
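
As a rough sketch of what such a report might contain, the outline below groups the disclosures suggested in the article and the summary above into one structure. The field names and groupings are hypothetical, my own illustration rather than a schema proposed by Narayanan and Kapoor.

```python
from dataclasses import dataclass

@dataclass
class TrainingDataDisclosure:
    sources: list[str]                  # e.g. licensed corpora, web crawls
    demographics_notes: str             # who is represented in the data, and how
    augmentation_techniques: list[str]  # filtering, synthetic data, etc.
    biases_detected_or_addressed: list[str]

@dataclass
class MisuseEnforcement:                # harms from malicious use of the tool
    prohibited_use_reports: int
    enforcement_actions: dict[str, int]  # e.g. {"output blocked": 120, "account suspended": 15}

@dataclass
class OverRelianceMetrics:              # harms from relying on inaccurate output
    factual_error_rate_by_domain: dict[str, float]  # e.g. {"legal": 0.2, "medical": 0.1}
    disclaimers_shown: bool

@dataclass
class IntrinsicHarms:                   # harmful content generated without being requested
    unsolicited_harmful_output_reports: int
    categories: list[str]               # e.g. sexualized imagery, defamation

@dataclass
class TransparencyReport:
    model_name: str
    reporting_period: str
    training_data: TrainingDataDisclosure
    misuse: MisuseEnforcement
    over_reliance: OverRelianceMetrics
    intrinsic_harms: IntrinsicHarms
```

The last three blocks correspond to the three harm types the authors identify: malicious use, over-reliance on inaccurate output, and intrinsically undesirable content.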

Monday, February 20, 2023

Definition drives design: Disability models and mechanisms of bias in AI technologies

Newman-Griffis, D., et al. (2023).
First Monday, 28(1).
https://doi.org/10.5210/fm.v28i1.12903

Abstract

The increasing deployment of artificial intelligence (AI) tools to inform decision-making across diverse areas including healthcare, employment, social benefits, and government policy, presents a serious risk for disabled people, who have been shown to face bias in AI implementations. While there has been significant work on analysing and mitigating algorithmic bias, the broader mechanisms of how bias emerges in AI applications are not well understood, hampering efforts to address bias where it begins. In this article, we illustrate how bias in AI-assisted decision-making can arise from a range of specific design decisions, each of which may seem self-contained and non-biasing when considered separately. These design decisions include basic problem formulation, the data chosen for analysis, the use the AI technology is put to, and operational design elements in addition to the core algorithmic design. We draw on three historical models of disability common to different decision-making settings to demonstrate how differences in the definition of disability can lead to highly distinct decisions on each of these aspects of design, leading in turn to AI technologies with a variety of biases and downstream effects. We further show that the potential harms arising from inappropriate definitions of disability in fundamental design stages are further amplified by a lack of transparency and disabled participation throughout the AI design process. Our analysis provides a framework for critically examining AI technologies in decision-making contexts and guiding the development of a design praxis for disability-related AI analytics. We put forth this article to provide key questions to facilitate disability-led design and participatory development to produce more fair and equitable AI technologies in disability-related contexts.

Conclusion

The proliferation of artificial intelligence (AI) technologies as behind the scenes tools to support decision-making processes presents significant risks of harm for disabled people. The unspoken assumptions and unquestioned preconceptions that inform AI technology development can serve as mechanisms of bias, building the base problem formulation that guides a technology on reductive and harmful conceptualisations of disability. As we have shown, even when developing AI technologies to address the same overall goal, different definitions of disability can yield highly distinct analytic technologies that reflect contrasting, frequently incompatible decisions in the information to analyse, what analytic process to use, and what the end product of analysis will be. Here we have presented an initial framework to support critical examination of specific design elements in the formulation of AI technologies for data analytics, as a tool to examine the definitions of disability used in their design and the resulting impacts on the technology. We drew on three important historical models of disability that form common foundations for policy, practice, and personal experience today—the medical, social, and relational models—and two use cases in healthcare and government benefits to illustrate how different ways of conceiving of disability can yield technologies that contrast and conflict with one another, creating distinct risks for harm.

Tuesday, January 10, 2023

San Francisco will allow police to deploy robots that kill

Janie Har
Associated Press
Originally posted 29 Nov 22

Supervisors in San Francisco voted Tuesday to give city police the ability to use potentially lethal, remote-controlled robots in emergency situations -- following an emotionally charged debate that reflected divisions on the politically liberal board over support for law enforcement.

The vote was 8-3, with the majority agreeing to grant police the option despite strong objections from civil liberties and other police oversight groups. Opponents said the authority would lead to the further militarization of a police force already too aggressive with poor and minority communities.

Supervisor Connie Chan, a member of the committee that forwarded the proposal to the full board, said she understood concerns over use of force but that “according to state law, we are required to approve the use of these equipments. So here we are, and it’s definitely not an easy discussion.”

The San Francisco Police Department said it does not have pre-armed robots and has no plans to arm robots with guns. But the department could deploy robots equipped with explosive charges “to contact, incapacitate, or disorient violent, armed, or dangerous suspect” when lives are at stake, SFPD spokesperson Allison Maxie said in a statement.

“Robots equipped in this manner would only be used in extreme circumstances to save or prevent further loss of innocent lives,” she said.

Supervisors amended the proposal Tuesday to specify that officers could use robots only after using alternative force or de-escalation tactics, or concluding they would not be able to subdue the suspect through those alternative means. Only a limited number of high-ranking officers could authorize use of robots as a deadly force option.

Thursday, January 5, 2023

The Supreme Court Needs Real Oversight

Glen Fine
The Atlantic
Originally posted 5 DEC 22

Here is an excerpt:

The lack of ethical rules that bind the Court is the first problem—and the easier one to address. The Code of Conduct for United States Judges, promulgated by the federal courts’ Judicial Conference, “prescribes ethical norms for federal judges as a means to preserve the actual and apparent integrity of the federal judiciary.” The code covers judicial conduct both on and off the bench, including requirements that judges act at all times to promote public confidence in the integrity and impartiality of the judiciary. But this code applies only to lower-level federal judges, not to the Supreme Court, which has not issued ethical rules that apply to its own conduct. The Court should explicitly adopt this code or a modified one.

Chief Justice Roberts has noted that Supreme Court justices voluntarily consult the Code of Conduct and other ethical rules for guidance. He has also pointed out that the justices can seek ethical advice from a variety of sources, including the Court’s Legal Office, the Judicial Conference’s Committee on Codes of Conduct, and their colleagues. But this is voluntary, and each justice decides independently whether and how ethical rules apply in any particular case. No one—including the chief justice—has the ability to alter a justice’s self-judgment.

Oversight of the judiciary is a more difficult issue, involving separation-of-powers concerns. I was the inspector general of the Department of Justice for 11 years and the acting inspector general of the Department of Defense for four years; I saw the importance and challenges of oversight in two of the most important government agencies. I also experienced the difficulties in conducting complex investigations of alleged misconduct, including leak investigations. But as I wrote in a Brookings Institution article this past May after the Dobbs leak, the Supreme Court does not have the internal capacity to effectively investigate such leaks, and it would benefit from a skilled internal investigator, like an inspector general, to help oversee the Court and the judiciary.

Another example of the Court’s ineffective self-policing and lack of transparency involves its recusal decisions. For example, Justice Thomas’s wife, Virginia Thomas, has argued that the 2020 presidential election was stolen, sent text messages to former White House Chief of Staff Mark Meadows urging him and the White House to seek to overturn the election, and expressed support for the pro-Trump January 6 rally on the Ellipse. Nevertheless, Justice Thomas has not recused himself in cases relating to the subsequent attack on the Capitol.

Notably, Thomas was the only justice to dissent from the Court’s decision not to block the release to the January 6 committee of White House records related to the attack, which included his wife’s texts. Some legal experts have argued that this is a clear instance where recusal should have occurred. Statute 28 U.S.C. 455 requires federal judges, including Supreme Court justices, to recuse themselves from a case when they know that their spouse has any interest that could be substantially affected by the outcome. In addition, the statute requires justices and judges to disqualify themselves in any proceeding in which their impartiality may reasonably be questioned.

Tuesday, November 29, 2022

The Supreme Court has lost its ethical compass. Can it find one fast?

Ruth Marcus
The Washington Post
Originally published 23 Nov 22

The Supreme Court must get its ethics act together, and Chief Justice John G. Roberts Jr. needs to take the lead. After a string of embarrassments, the justices should finally subject themselves to the kind of rules that govern other federal judges and establish a standard for when to step aside from cases — one that is more stringent than simply leaving it up to the individual justice to decide.

Recent episodes are alarming and underscore the need for quick action to help restore confidence in the institution.

Last week, the Supreme Court wisely rebuffed an effort by Arizona GOP chair Kelli Ward to prevent the House Jan. 6 committee — the party in this case — from obtaining her phone records. The court’s brief order noted that Justice Clarence Thomas, along with Justice Samuel A. Alito Jr., would have sided with Ward.

Thomas’s involvement, though it didn’t affect the outcome of the dispute, is nothing short of outrageous. Federal law already requires judges, including Supreme Court justices, to step aside from involvement in any case in which their impartiality “might reasonably be questioned.”

Perhaps back in January, when he was the only justice to disagree when the court refused to grant former president Donald Trump’s bid to stop his records from being turned over to the Jan. 6 committee, Thomas didn’t realize the extent of his wife’s involvement with disputing the election results. (I’m being kind here: Ginni Thomas had signed a letter the previous month calling on House Republicans to expel Reps. Liz Cheney of Wyoming and Adam Kinzinger of Illinois from the House Republican Conference for participating in an “overtly partisan political persecution.”)

But here’s what we know now, and Justice Thomas does, too: The Jan. 6 committee has subpoenaed and interviewed his wife. We — and he — know that she contacted 29 Arizona lawmakers, urging them to “fight back against fraud” and choose a “clean slate of electors” after the 2020 election.

Some recusal questions are close. Not this one. Did the chief justice urge Thomas to recuse? He should have. This will sound unthinkable, but if Roberts asked and Thomas refused, maybe it’s time for the chief, or other justices, to publicly note their disagreement.

(cut)

One obvious step is to follow the ethics rules that apply to other federal judges, perhaps adapting them to the particular needs of the high court. That would send an important — and overdue — message that the justices are not a law unto themselves. It’s symbolic, but symbolism matters.

Tuesday, August 2, 2022

How to end cancel culture

Jennifer Stefano
Philadelphia Inquirer
Originally posted 25 JUL 22

Here is an excerpt:

Radical politics requires radical generosity toward those with whom we disagree — if we are to remain a free and civil society that does not descend into violence. Are we not a people defined by the willingness to spend our lives fighting against what another has said, but give our lives to defend her right to say it? Instead of being hypersensitive fragilistas, perhaps we could give that good old-fashioned American paradox a try again.

But how? Start by engaging in the democratic process by first defending people’s right to be awful. Then use that right to point out just how awful someone’s words or deeds are. Accept that you have freedom of speech, not freedom from offense. A free society best holds people accountable in the arena of ideas. When we trade debate for the dehumanizing act of cancellation, we head down a dangerous path — even if the person who would be canceled has behaved in a dehumanizing way toward others.

Canceling those with opinions most people deem morally wrong and socially unacceptable (racism, misogyny) leads to a permissiveness in simply labeling speech we do not like as those very things without any reason or recourse. Worse, cancel culture is creating a society where dissenting or unpopular opinions become a risk. Canceling isn’t about debate but dehumanizing.

Speech is free. The consequences are not. Actress Constance Wu attempted suicide after she was canceled in 2019 for publicly tweeting she didn’t love her job on a hit TV show. Her words harmed no one, but she was publicly excoriated for them. Private DMs from her fellow Asian actresses telling her she was a “blight” on the Asian American community made her believe she didn’t deserve to live. Wu didn’t lose her job for her words, but she nearly lost her life.

Cancel culture does more than make the sinner pay a penance. It offers none of the healing redemption necessary for a free and civil society. In America, we have always believed in second chances. It is the basis for the bipartisan work on issues like criminal justice reform. Our achievements here have been a bright spot.

We as a civil society want to give the formerly incarcerated a second chance. How about doing the same for each other?

Monday, June 13, 2022

San Diego doctor who smuggled hydroxychloroquine into US, sold medication as a COVID-19 cure sentenced

Hope Sloop
KSWB-TV San Diego
Originally posted 29 MAY 22

A San Diego doctor was sentenced Friday to 30 days of custody and one year of house arrest for attempting to smuggle hydroxychloroquine into the U.S. and sell COVID-19 "treatment kits" at the beginning of the pandemic.  

According to officials with the U.S. Department of Justice, Jennings Ryan Staley attempted to sell what he described as a "medical cure" for the coronavirus, which was really hydroxychloroquine powder that the physician had imported from China by mislabeling the shipping container as "yam extract." Staley had attempted to replicate this process with another seller at one point as well, but the importer told the San Diego doctor that they "must do it legally."

Following the arrival of his shipment of the hydroxychloroquine powder, Staley solicited investors to help fund his operation to sell the filled capsules as a "medical cure" for COVID-19. The SoCal doctor told potential investors that he could triple their money within 90 days.  

Staley also admitted in his plea agreement that he had written false prescriptions for hydroxychloroquine, using an associate's name and personal details without the employee's consent or knowledge.

During an undercover operation, an agent purchased six of Staley's "treatment kits" for $4,000 and, during a recorded phone call, the doctor bragged about the efficacy of the kits and said, "I got the last tank of . . . hydroxychloroquine, smuggled out of China."  

Thursday, February 10, 2022

Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them

Santoni de Sio, F., Mecacci, G. 
Philos. Technol. 34, 1057–1084 (2021). 
https://doi.org/10.1007/s13347-021-00450-x

Abstract

The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems – gaps in culpability, moral and public accountability, active responsibility—caused by different sources, some technical, other organisational, legal, ethical, and societal. Responsibility gaps may also happen with non-learning systems. The paper clarifies which aspect of AI may cause which gap in which form of responsibility, and why each of these gaps matter. It proposes a critical review of partial and non-satisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem (“fatalism”), those which dismiss it as a false problem (“deflationism”), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved by simply introducing new technical and/or legal tools (“solutionism”). The paper also outlines a more comprehensive approach to address the responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control", that is systems aligned with the relevant human reasons and capacities.

(cut)

The Tracing Condition and its Payoffs for Responsibility

Unlike proposals based on new forms of legal liability, MHC (Meaningful Human Control) proposes that socio-technical systems are also systematically designed to avoid gaps in moral culpability, accountability, and active responsibility. The “tracing condition” proposes that a system can remain under MHC only in the presence of a solid alignment between the system and the technical, motivational, moral capacities of the relevant agents involved, with different roles, in the design, control, and use of the system. The direct goal of this condition is promoting a fair distribution of moral culpability, thereby avoiding two undesired results: first, scapegoating, i.e. agents being held culpable without having a fair capacity to avoid wrongdoing (Elish, 2019): in the example of the automated driving systems above, for instance, the drivers’ relevant technical and motivational capacities not being sufficiently studied and trained. Second, impunity for avoidable accidents, i.e. culpability gaps: the impossibility to legitimately blame anybody as no individual agent possesses all the relevant capacities, e.g. the managers/designers having the technical capacity but not the moral motivation to avoid accidents and the drivers having the motivation but not the skills. The tracing condition also helps addressing accountability and active responsibility gaps. If a person or organisation should be morally or publicly accountable, then they must also possess the specific capacity to discharge this duty: according to another example discussed above, if a doctor has to remain accountable to their patients for her decisions, then she should maintain the capacity and motivation to understand the functioning of the AI system she uses and to explain her decision to the patients.

Monday, July 26, 2021

Do doctors engaging in advocacy speak for themselves or their profession?

Elizabeth Lanphier
Journal of Medical Ethics Blog
Originally posted 17 June 21

Here is an excerpt:

My concern is not the claim that expertise should be shared. (It should!) Nor do I think there is any neat distinction between physician responsibilities for individual health and public health. But I worry that when Strous and Karni alternately frame physician duties to “speak out” as individual duties and collective ones, they collapse necessary distinctions between the risks, benefits, and demands of these two types of obligations.

Many of us have various role-based individual responsibilities. We can have obligations as a parent, as a citizen, or as a professional. Having an individual responsibility as a physician involves duties to your patients, but also general duties to care in the event you are in a situation in which your expertise is needed (the “is there a doctor on this flight?” scenario).

Collective responsibility, on the other hand, is when a group has a responsibility as a group. The philosophical literature debates hard to resolve questions about what it means to be a “group,” and how groups come to have or discharge responsibilities. Collective responsibility raises complicated questions like: If physicians have a collective responsibility to speak out during the COVID-19 pandemic, does every physician has such an obligation? Does any individual physician?

Because individual obligations attribute duties to specific persons responsible for carrying them out in ways collective duties tend not to, I see why individual physician obligations are attractive. But this comes with risks. One risk is that a physician speaks out as an individual, appealing to the authority of their medical credentials, but not in alignment with their profession.

In my essay I describe a family physician inviting his extended family for a holiday meal during a peak period of SARS-CoV-2 transmission because he didn’t think COVID-19 was a “big deal.”

More infamously, Dr. Scott Atlas served as Donald J. Trump’s coronavirus advisor, and although he is a physician, he did not have experience in public health, infectious disease, or critical care medicine applicable to COVID-19. Atlas was a physician speaking as a physician, but he routinely promoted views starkly different than those of physicians with expertise relevant to the pandemic, and the guidance coming from scientific and medical communities.

Tuesday, May 18, 2021

Moderators of The Liking Bias in Judgments of Moral Character

Bocian, K., Baryla, W., & Wojciszke, B.
Personality and Social Psychology Bulletin. 
(2021)

Abstract 

Previous research found evidence for a liking bias in moral character judgments because judgments of liked people are higher than those of disliked or neutral ones. The present article sought conditions moderating this effect. In Study 1 (N = 792), the impact of the liking bias on moral character judgments was strongly attenuated when participants were educated that attitudes bias moral judgments. In Study 2 (N = 376), the influence of liking on moral character attributions was eliminated when participants were accountable for the justification of their moral judgments. Overall, these results suggest that even though liking biases moral character attributions, this bias might be reduced or eliminated when deeper information processing is required to generate judgments of others’ moral character.

Keywords: moral judgments, moral character, attitudes, liking bias, accountability

General Discussion

In this research, we sought to replicate the past results that demonstrated the influence of liking on moral character judgments, and we investigated conditions that could limit this influence. We demonstrated that liking elicited by similarity (Study 1) and mimicry (Study 2) biases the perceptions of another person’s moral character. Thus, we corroborated previous findings by Bocian et al. (2018), who found that attitudes bias moral judgments. More importantly, we showed conditions that moderate the liking bias. Specifically, in Study 1, we found evidence that forewarning participants that liking can bias moral character judgments weakened the liking bias twofold. In Study 2, we demonstrated that the liking bias was eliminated when we made participants accountable for their moral decisions.

By systematically examining the conditions that reduce the liking influences on moral character attributions, we built on and extended the past work in the area of moral cognition and bias reduction. First, while past studies have focused on the impact of accountability on the fundamental attribution error (Tetlock, 1985), overconfidence (Tetlock & Kim, 1987), or order of information (Schadewald & Limberg, 1992), we examined the effectiveness of accountability in debiasing moral judgments. Thus, we demonstrated that biased moral judgments could be effectively corrected when people are obliged to justify their judgments to others. Second, we showed that educating people that attitudes might bias their moral judgments, to some extent, effectively helped them debias their moral character judgments. We thus extended the past research on the effectiveness of forewarning people of biases in social judgment and decision-making (Axt et al., 2018; Hershberger et al., 1997) to biases in moral judgments.

Saturday, May 15, 2021

Moral zombies: why algorithms are not moral agents

Véliz, C. 
AI & Soc (2021). 

Abstract

In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.

Conclusion

This paper has argued that moral zombies—creatures that behave like moral agents but lack sentience—are incoherent as moral agents. Only beings who can experience pain and pleasure can understand what it means to inflict pain or cause pleasure, and only those with this moral understanding can be moral agents. What I have dubbed ‘moral zombies’ are relevant because they are similar to algorithms in that they make moral decisions as human beings would—determining who gets which benefits and penalties—without having any concomitant sentience.

There might come a time when AI becomes so sophisticated that robots might possess desires and values of their own. It will not, however, be on account of their computational prowess, but on account of their sentience, which may in turn require some kind of embodiment. At present, we are far from creating sentient algorithms.

When algorithms cause moral havoc, as they often do, we must look to the human beings who designed, programmed, commissioned, implemented, and were supposed to supervise them to assign the appropriate blame. For all their complexity and flair, algorithms are nothing but tools, and moral agents are fully responsible for the tools they create and use.