Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Accountability.

Saturday, September 9, 2023

Academics Raise More Than $315,000 for Data Bloggers Sued by Harvard Business School Professor Gino

Neil H. Shah & Claire Yuan
The Crimson
Originally published 1 Sept 23

A group of academics has raised more than $315,000 through a crowdfunding campaign to support the legal expenses of the professors behind data investigation blog Data Colada — who are being sued for defamation by Harvard Business School professor Francesca Gino.

Supporters of the three professors — Uri Simonsohn, Leif D. Nelson, and Joseph P. Simmons — launched the GoFundMe campaign to raise funds for their legal fees after they were named in a $25 million defamation lawsuit filed by Gino last month.

In a series of four blog posts in June, Data Colada gave a detailed account of alleged research misconduct by Gino across four academic papers. Two of the papers were retracted following the allegations by Data Colada, while another had previously been retracted in September 2021 and a fourth is set to be retracted in September 2023.

Organizers wrote on GoFundMe that the fundraiser “hit 2,000 donors and $250K in less than 2 days” and that Simonsohn, Nelson, and Simmons “are deeply moved and grateful for this incredible show of support.”

Simine Vazire, one of the fundraiser’s organizers, said she was “pleasantly surprised” by the reaction throughout academia in support of Data Colada.

“It’s been really nice to see the consensus among the academic community, which is strikingly different than what I see on LinkedIn and the non-academic community,” she said.

Elisabeth M. Bik — a data manipulation expert who also helped organize the fundraiser — credited the outpouring of financial support to solidarity and concern among scientists.

“People are very concerned about this lawsuit and about the potential silencing effect this could have on people who criticize other people’s papers,” Bik said. “I think a lot of people want to support Data Colada for their legal defenses.”

Andrew T. Miltenberg — one of Gino’s attorneys — wrote in an emailed statement that the lawsuit is “not an indictment on Data Colada’s mission.”

Wednesday, August 16, 2023

A Federal Judge Asks: Does the Supreme Court Realize How Bad It Smells?

Michael Ponsor
The New York Times: Opinion
Originally posted 14 July 23

What has gone wrong with the Supreme Court’s sense of smell?

I joined the federal bench in 1984, some years before any of the justices currently on the Supreme Court. Throughout my career, I have been bound and guided by a written code of conduct, backed by a committee of colleagues I can call on for advice. In fact, I checked with a member of that committee before writing this essay.

A few times in my nearly 40 years on the bench, complaints have been filed against me. This is not uncommon for a federal judge. So far, none have been found to have merit, but all of these complaints have been processed with respect, and I have paid close attention to them.

The Supreme Court has avoided imposing a formal ethical apparatus on itself like the one that applies to all other federal judges. I understand the general concern, in part. A complaint mechanism could become a political tool to paralyze the court or a playground for gadflies. However, a skillfully drafted code could overcome this problem. Even a nonenforceable code that the justices formally pledged to respect would be an improvement on the current void.

Reasonable people may disagree on this. The more important, uncontroversial point is that if there will not be formal ethical constraints on our Supreme Court — or even if there will be — its justices must have functioning noses. They must keep themselves far from any conduct with a dubious aroma, even if it may not breach a formal rule.

The fact is, when you become a judge, stuff happens. Many years ago, as a fairly new federal magistrate judge, I was chatting about our kids with a local attorney I knew only slightly. As our conversation unfolded, he mentioned that he’d been planning to take his 10-year-old to a Red Sox game that weekend but their plan had fallen through. Would I like to use his tickets?

Sunday, July 23, 2023

How to Use AI Ethically for Ethical Decision-Making

Demaree-Cotton, J., Earp, B. D., & Savulescu, J.
(2022). American Journal of Bioethics, 22(7), 1–3.

Here is an excerpt:

The kind of AI proposed by Meier and colleagues (2022) has the fascinating potential to improve the transparency of ethical decision-making, at least if it is used as a decision aid rather than a decision replacement (Savulescu & Maslen 2015). While artificial intelligence cannot itself engage in the human communicative process of justifying its decisions to patients, the AI they describe (unlike “black-box” AI) makes explicit which values and principles are involved and how much weight they are given.

By contrast, the moral principles or values underlying human moral intuition are not always consciously, introspectively accessible (Cushman, Young, and Hauser 2006). While humans sometimes have a fuzzy, intuitive sense of some of the factors that are relevant to their moral judgment, we often have strong moral intuitions without being sure of their source, or without being clear on precisely how strongly different factors played a role in generating the intuitions. But if clinicians make use of the AI as a decision aid, this could help them to transparently and precisely communicate the actual reasons behind their decision.

This is so even if the AI’s recommendation is ultimately rejected. Suppose, for example, that the AI recommends a course of action, with a certain amount of confidence, and it specifies the exact values or weights it has assigned to autonomy versus beneficence in coming to this conclusion. Evaluating the recommendation made by the AI could help a committee make more explicit the “black box” aspects of their own reasoning. For example, the committee might decide that beneficence should actually be weighted more heavily in this case than the AI suggests. Being able to understand the reason that their decision diverges from that of the AI gives them the opportunity to offer a further justifying reason as to why they think beneficence should be given more weight; and this, in turn, could improve the transparency of their recommendation.
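The value-explicit decision aid described in the excerpt can be illustrated with a minimal sketch. This is not the actual system from Meier and colleagues; the principle names, weights, and scores below are all hypothetical, chosen only to show how explicit weights make the rationale inspectable:

```python
# Minimal sketch of a value-explicit decision aid: each option is scored
# against named ethical principles, and the weights are reported alongside
# the recommendation so a committee can inspect (or reject) the balance.
# All principle names and numbers are illustrative, not a real system.

def recommend(options, weights):
    """Score each option as a weighted sum over principles; return the
    top option together with the weights and per-option scores."""
    scored = {
        name: sum(weights[p] * scores[p] for p in weights)
        for name, scores in options.items()
    }
    best = max(scored, key=scored.get)
    return {"recommendation": best, "weights": weights, "scores": scored}

options = {
    "continue_treatment": {"autonomy": 0.2, "beneficence": 0.9},
    "withdraw_treatment": {"autonomy": 0.8, "beneficence": 0.3},
}
result = recommend(options, weights={"autonomy": 0.6, "beneficence": 0.4})
# Because the weights are explicit, a committee that thinks beneficence
# deserves more weight can simply re-run with different weights and see
# exactly where its judgment diverges from the aid's.
```

The point of the sketch is the excerpt's point: unlike a black-box model, everything that drove the recommendation is visible and contestable.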

However, the potential for the kind of AI described in the target article to improve the accuracy of moral decision-making may be more limited. This is so for two reasons. Firstly, whether AI can be expected to outperform human decision-making depends in part on the metrics used to train it. In non-ethical domains, superior accuracy can be achieved because the “verdicts” given to the AI in the training phase are not solely the human judgments that the AI is intended to replace or inform. Consider how AI can learn to detect lung cancer from scans at a superior rate to human radiologists after being trained on large datasets and being “told” which scans show cancer and which ones are cancer-free. Importantly, this training includes cases where radiologists did not recognize cancer in the early scans themselves, but where further information verified the correct diagnosis later on (Ardila et al. 2019). Consequently, these AIs are able to detect patterns even in early scans that are not apparent or easily detectable by human radiologists, leading to superior accuracy compared to human performance.

Saturday, July 22, 2023

Generative AI companies must publish transparency reports

A. Narayanan and S. Kapoor
Knight First Amendment Institute
Originally published 26 June 23

Here is an excerpt:

Transparency reports must cover all three types of harms from AI-generated content

There are three main types of harms that may result from model outputs.

First, generative AI tools could be used to harm others, such as by creating non-consensual deepfakes or child sexual exploitation materials. Developers do have policies that prohibit such uses. For example, OpenAI's policies prohibit a long list of uses, including the use of its models to generate unauthorized legal, financial, or medical advice for others. But these policies cannot have real-world impact if they are not enforced, and due to platforms' lack of transparency about enforcement, we have no idea if they are effective. Similar challenges in ensuring platform accountability have also plagued social media in the past; for instance, ProPublica reporters repeatedly found that Facebook failed to fully remove discriminatory ads from its platform despite claiming to have done so.

Sophisticated bad actors might use open-source tools to generate content that harms others, so enforcing use policies can never be a comprehensive solution. In a recent essay, we argued that disinformation is best addressed by focusing on its distribution (e.g., on social media) rather than its generation. Still, some actors will use tools hosted in the cloud either due to convenience or because the most capable models don’t tend to be open-source. For these reasons, transparency is important for cloud-based generative AI.

Second, users may over-rely on AI for factual information, such as legal, financial, or medical advice. Sometimes they are simply unaware of the tendency of current chatbots to frequently generate incorrect information. For example, a user might ask "what are the divorce laws in my state?" and not know that the answer is unreliable. Alternatively, the user might be harmed because they weren’t careful enough to verify the generated information, despite knowing that it might be inaccurate. Research on automation bias shows that people tend to over-rely on automated tools in many scenarios, sometimes making more errors than when not using the tool.

ChatGPT includes a disclaimer that it sometimes generates inaccurate information. But OpenAI has often touted its performance on medical and legal exams. And importantly, the tool is often genuinely useful at medical diagnosis or legal guidance. So, regardless of whether it’s a good idea to do so, people are in fact using it for these purposes. That makes harm reduction important, and transparency is an important first step.

Third, generated content could be intrinsically undesirable. Unlike the previous types, here the harms arise not because of users' malice, carelessness, or lack of awareness of limitations. Rather, intrinsically problematic content is generated even though it wasn’t requested. For example, Lensa's avatar creation app generated sexualized images and nudes when women uploaded their selfies. Defamation is also intrinsically harmful rather than a matter of user responsibility. It is no comfort to the target of defamation to say that the problem would be solved if every user who might encounter a false claim about them were to exercise care to verify it.


Quick summary: 

The call for transparency reports aims to increase accountability and understanding of the inner workings of generative AI models. By disclosing information about the data used to train the models, the companies can address concerns regarding potential biases and ensure the ethical use of their technology.

Transparency reports could include details about the sources and types of data used, the demographics represented in the training data, any data augmentation techniques applied, and potential biases detected or addressed during model development. This information would enable users, policymakers, and researchers to evaluate the capabilities and limitations of the generative AI systems.
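The contents suggested above can be pictured as a simple structured record. There is no standard format for generative-AI transparency reports; every field name below is an assumption made for illustration, loosely following the items listed in the summary (data sources, demographics, augmentation, biases, enforcement):

```python
# Hypothetical schema for a generative-AI transparency report.
# No standard exists; all field names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class TransparencyReport:
    model_name: str
    data_sources: list[str]          # provenance of training data
    demographics_notes: str          # who is (under)represented in the data
    augmentation_methods: list[str]  # data augmentation techniques applied
    known_biases: list[str]          # biases detected or mitigated
    # Counts of use-policy enforcement actions, e.g. requests blocked;
    # this is the enforcement data the excerpt says is currently missing.
    policy_enforcement: dict[str, int] = field(default_factory=dict)

report = TransparencyReport(
    model_name="example-model",
    data_sources=["licensed corpora", "public web crawl"],
    demographics_notes="skews toward English-language text",
    augmentation_methods=[],
    known_biases=["underrepresents non-Western contexts"],
)
```

A machine-readable shape like this would let researchers and policymakers compare disclosures across companies rather than parse free-form PDFs.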

Monday, February 20, 2023

Definition drives design: Disability models and mechanisms of bias in AI technologies

Newman-Griffis, D., et al. (2023).
First Monday, 28(1).
https://doi.org/10.5210/fm.v28i1.12903

Abstract

The increasing deployment of artificial intelligence (AI) tools to inform decision-making across diverse areas including healthcare, employment, social benefits, and government policy, presents a serious risk for disabled people, who have been shown to face bias in AI implementations. While there has been significant work on analysing and mitigating algorithmic bias, the broader mechanisms of how bias emerges in AI applications are not well understood, hampering efforts to address bias where it begins. In this article, we illustrate how bias in AI-assisted decision-making can arise from a range of specific design decisions, each of which may seem self-contained and non-biasing when considered separately. These design decisions include basic problem formulation, the data chosen for analysis, the use the AI technology is put to, and operational design elements in addition to the core algorithmic design. We draw on three historical models of disability common to different decision-making settings to demonstrate how differences in the definition of disability can lead to highly distinct decisions on each of these aspects of design, leading in turn to AI technologies with a variety of biases and downstream effects. We further show that the potential harms arising from inappropriate definitions of disability in fundamental design stages are further amplified by a lack of transparency and disabled participation throughout the AI design process. Our analysis provides a framework for critically examining AI technologies in decision-making contexts and guiding the development of a design praxis for disability-related AI analytics. We put forth this article to provide key questions to facilitate disability-led design and participatory development to produce more fair and equitable AI technologies in disability-related contexts.

Conclusion

The proliferation of artificial intelligence (AI) technologies as behind-the-scenes tools to support decision-making processes presents significant risks of harm for disabled people. The unspoken assumptions and unquestioned preconceptions that inform AI technology development can serve as mechanisms of bias, building the base problem formulation that guides a technology on reductive and harmful conceptualisations of disability. As we have shown, even when developing AI technologies to address the same overall goal, different definitions of disability can yield highly distinct analytic technologies that reflect contrasting, frequently incompatible decisions in the information to analyse, what analytic process to use, and what the end product of analysis will be. Here we have presented an initial framework to support critical examination of specific design elements in the formulation of AI technologies for data analytics, as a tool to examine the definitions of disability used in their design and the resulting impacts on the technology. We drew on three important historical models of disability that form common foundations for policy, practice, and personal experience today—the medical, social, and relational models—and two use cases in healthcare and government benefits to illustrate how different ways of conceiving of disability can yield technologies that contrast and conflict with one another, creating distinct risks for harm.

Tuesday, January 10, 2023

San Francisco will allow police to deploy robots that kill

Janie Har
Associated Press
Originally posted 29 Nov 22

Supervisors in San Francisco voted Tuesday to give city police the ability to use potentially lethal, remote-controlled robots in emergency situations -- following an emotionally charged debate that reflected divisions on the politically liberal board over support for law enforcement.

The vote was 8-3, with the majority agreeing to grant police the option despite strong objections from civil liberties and other police oversight groups. Opponents said the authority would lead to the further militarization of a police force already too aggressive with poor and minority communities.

Supervisor Connie Chan, a member of the committee that forwarded the proposal to the full board, said she understood concerns over use of force but that “according to state law, we are required to approve the use of these equipments. So here we are, and it’s definitely not a easy discussion.”

The San Francisco Police Department said it does not have pre-armed robots and has no plans to arm robots with guns. But the department could deploy robots equipped with explosive charges “to contact, incapacitate, or disorient violent, armed, or dangerous suspect” when lives are at stake, SFPD spokesperson Allison Maxie said in a statement.

“Robots equipped in this manner would only be used in extreme circumstances to save or prevent further loss of innocent lives,” she said.

Supervisors amended the proposal Tuesday to specify that officers could use robots only after using alternative force or de-escalation tactics, or concluding they would not be able to subdue the suspect through those alternative means. Only a limited number of high-ranking officers could authorize use of robots as a deadly force option.

Thursday, January 5, 2023

The Supreme Court Needs Real Oversight

Glen Fine
The Atlantic
Originally posted 5 Dec 22

Here is an excerpt:

The lack of ethical rules that bind the Court is the first problem—and the easier one to address. The Code of Conduct for United States Judges, promulgated by the federal courts’ Judicial Conference, “prescribes ethical norms for federal judges as a means to preserve the actual and apparent integrity of the federal judiciary.” The code covers judicial conduct both on and off the bench, including requirements that judges act at all times to promote public confidence in the integrity and impartiality of the judiciary. But this code applies only to lower-level federal judges, not to the Supreme Court, which has not issued ethical rules that apply to its own conduct. The Court should explicitly adopt this code or a modified one.

Chief Justice Roberts has noted that Supreme Court justices voluntarily consult the Code of Conduct and other ethical rules for guidance. He has also pointed out that the justices can seek ethical advice from a variety of sources, including the Court’s Legal Office, the Judicial Conference’s Committee on Codes of Conduct, and their colleagues. But this is voluntary, and each justice decides independently whether and how ethical rules apply in any particular case. No one—including the chief justice—has the ability to alter a justice’s self-judgment.

Oversight of the judiciary is a more difficult issue, involving separation-of-powers concerns. I was the inspector general of the Department of Justice for 11 years and the acting inspector general of the Department of Defense for four years; I saw the importance and challenges of oversight in two of the most important government agencies. I also experienced the difficulties in conducting complex investigations of alleged misconduct, including leak investigations. But as I wrote in a Brookings Institution article this past May after the Dobbs leak, the Supreme Court does not have the internal capacity to effectively investigate such leaks, and it would benefit from a skilled internal investigator, like an inspector general, to help oversee the Court and the judiciary.

Another example of the Court’s ineffective self-policing and lack of transparency involves its recusal decisions. For example, Justice Thomas’s wife, Virginia Thomas, has argued that the 2020 presidential election was stolen, sent text messages to former White House Chief of Staff Mark Meadows urging him and the White House to seek to overturn the election, and expressed support for the pro-Trump January 6 rally on the Ellipse. Nevertheless, Justice Thomas has not recused himself in cases relating to the subsequent attack on the Capitol.

Notably, Thomas was the only justice to dissent from the Court’s decision not to block the release to the January 6 committee of White House records related to the attack, which included his wife’s texts. Some legal experts have argued that this is a clear instance where recusal should have occurred. Statute 28 U.S.C. 455 requires federal judges, including Supreme Court justices, to recuse themselves from a case when they know that their spouse has any interest that could be substantially affected by the outcome. In addition, the statute requires justices and judges to disqualify themselves in any proceeding in which their impartiality may reasonably be questioned.

Tuesday, November 29, 2022

The Supreme Court has lost its ethical compass. Can it find one fast?

Ruth Marcus
The Washington Post
Originally published 23 Nov 22

The Supreme Court must get its ethics act together, and Chief Justice John G. Roberts Jr. needs to take the lead. After a string of embarrassments, the justices should finally subject themselves to the kind of rules that govern other federal judges and establish a standard for when to step aside from cases — one that is more stringent than simply leaving it up to the individual justice to decide.

Recent episodes are alarming and underscore the need for quick action to help restore confidence in the institution.

Last week, the Supreme Court wisely rebuffed an effort by Arizona GOP chair Kelli Ward to prevent the House Jan. 6 committee — the party in this case — from obtaining her phone records. The court’s brief order noted that Justice Clarence Thomas, along with Justice Samuel A. Alito Jr., would have sided with Ward.

Thomas’s involvement, though it didn’t affect the outcome of the dispute, is nothing short of outrageous. Federal law already requires judges, including Supreme Court justices, to step aside from involvement in any case in which their impartiality “might reasonably be questioned.”

Perhaps back in January, when he was the only justice to disagree when the court refused to grant former president Donald Trump’s bid to stop his records from being turned over to the Jan. 6 committee, Thomas didn’t realize the extent of his wife’s involvement with disputing the election results. (I’m being kind here: Ginni Thomas had signed a letter the previous month calling on House Republicans to expel Reps. Liz Cheney of Wyoming and Adam Kinzinger of Illinois from the House Republican Conference for participating in an “overtly partisan political persecution.”)

But here’s what we know now, and Justice Thomas does, too: The Jan. 6 committee has subpoenaed and interviewed his wife. We — and he — know that she contacted 29 Arizona lawmakers, urging them to “fight back against fraud” and choose a “clean slate of electors” after the 2020 election.

Some recusal questions are close. Not this one. Did the chief justice urge Thomas to recuse? He should have. This will sound unthinkable, but if Roberts asked and Thomas refused, maybe it’s time for the chief, or other justices, to publicly note their disagreement.

(cut)

One obvious step is to follow the ethics rules that apply to other federal judges, perhaps adapting them to the particular needs of the high court. That would send an important — and overdue — message that the justices are not a law unto themselves. It’s symbolic, but symbolism matters.

Tuesday, August 2, 2022

How to end cancel culture

Jennifer Stefano
Philadelphia Inquirer
Originally posted 25 Jul 22

Here is an excerpt:

Radical politics requires radical generosity toward those with whom we disagree — if we are to remain a free and civil society that does not descend into violence. Are we not a people defined by the willingness to spend our lives fighting against what another has said, but give our lives to defend her right to say it? Instead of being hypersensitive fragilistas, perhaps we could give that good old-fashioned American paradox a try again.

But how? Start by engaging in the democratic process by first defending people’s right to be awful. Then use that right to point out just how awful someone’s words or deeds are. Accept that you have freedom of speech, not freedom from offense. A free society best holds people accountable in the arena of ideas. When we trade debate for the dehumanizing act of cancellation, we head down a dangerous path — even if the person who would be canceled has behaved in a dehumanizing way toward others.

Canceling those with opinions most people deem morally wrong and socially unacceptable (racism, misogyny) leads to a permissiveness in simply labeling speech we do not like as those very things without any reason or recourse. Worse, cancel culture is creating a society where dissenting or unpopular opinions become a risk. Canceling isn’t about debate but dehumanizing.

Speech is free. The consequences are not. Actress Constance Wu attempted suicide after she was canceled in 2019 for publicly tweeting she didn’t love her job on a hit TV show. Her words harmed no one, but she was publicly excoriated for them. Private DMs from her fellow Asian actresses telling her she was a “blight” on the Asian American community made her believe she didn’t deserve to live. Wu didn’t lose her job for her words, but she nearly lost her life.

Cancel culture does more than make the sinner pay a penance. It offers none of the healing redemption necessary for a free and civil society. In America, we have always believed in second chances. It is the basis for the bipartisan work on issues like criminal justice reform. Our achievements here have been a bright spot.

We as a civil society want to give the formerly incarcerated a second chance. How about doing the same for each other?

Monday, June 13, 2022

San Diego doctor who smuggled hydroxychloroquine into US, sold medication as a COVID-19 cure sentenced

Hope Sloop
KSWB-TV San Diego
Originally posted 29 May 22

A San Diego doctor was sentenced Friday to 30 days of custody and one year of house arrest for attempting to smuggle hydroxychloroquine into the U.S. and sell COVID-19 "treatment kits" at the beginning of the pandemic.  

According to officials with the U.S. Department of Justice, Jennings Ryan Staley attempted to sell what he described as a "medical cure" for the coronavirus, which was really hydroxychloroquine powder that the physician had imported from China by mislabeling the shipping container as "yam extract." Staley had attempted to replicate this process with another seller at one point as well, but the importer told the San Diego doctor that they "must do it legally."

Following the arrival of his shipment of the hydroxychloroquine powder, Staley solicited investors to help fund his operation to sell the filled capsules as a "medical cure" for COVID-19. The SoCal doctor told potential investors that he could triple their money within 90 days.  

Staley also told investigators via his plea agreement that he had written false prescriptions for hydroxychloroquine, using his associate's name and personal details without the employee's consent or knowledge.  

During an undercover operation, an agent purchased six of Staley's "treatment kits" for $4,000 and, during a recorded phone call, the doctor bragged about the efficacy of the kits and said, "I got the last tank of . . . hydroxychloroquine, smuggled out of China."  

Thursday, February 10, 2022

Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them

Santoni de Sio, F., Mecacci, G. 
Philos. Technol. 34, 1057–1084 (2021). 
https://doi.org/10.1007/s13347-021-00450-x

Abstract

The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems – gaps in culpability, moral and public accountability, and active responsibility – caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also happen with non-learning systems. The paper clarifies which aspect of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It proposes a critical review of partial and non-satisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem (“fatalism”), those which dismiss it as a false problem (“deflationism”), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved by simply introducing new technical and/or legal tools (“solutionism”). The paper also outlines a more comprehensive approach to address the responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control”, that is, systems aligned with the relevant human reasons and capacities.

(cut)

The Tracing Conditions and its Payoffs for Responsibility

Unlike proposals based on new forms of legal liability, MHC (Meaningful Human Control) proposes that socio-technical systems are also systematically designed to avoid gaps in moral culpability, accountability, and active responsibility. The “tracing condition” proposes that a system can remain under MHC only in the presence of a solid alignment between the system and the technical, motivational, and moral capacities of the relevant agents involved, with different roles, in the design, control, and use of the system. The direct goal of this condition is promoting a fair distribution of moral culpability, thereby avoiding two undesired results. First, scapegoating, i.e. agents being held culpable without having a fair capacity to avoid wrongdoing (Elish, 2019): in the example of the automated driving systems above, for instance, the drivers’ relevant technical and motivational capacities not being sufficiently studied and trained. Second, impunity for avoidable accidents, i.e. culpability gaps: the impossibility of legitimately blaming anybody because no individual agent possesses all the relevant capacities, e.g. the managers/designers having the technical capacity but not the moral motivation to avoid accidents, and the drivers having the motivation but not the skills. The tracing condition also helps address accountability and active responsibility gaps. If a person or organisation is to be morally or publicly accountable, then they must also possess the specific capacity to discharge this duty: according to another example discussed above, if a doctor is to remain accountable to her patients for her decisions, then she should maintain the capacity and motivation to understand the functioning of the AI system she uses and to explain her decisions to the patients.

Monday, July 26, 2021

Do doctors engaging in advocacy speak for themselves or their profession?

Elizabeth Lanphier
Journal of Medical Ethics Blog
Originally posted 17 June 21

Here is an excerpt:

My concern is not the claim that expertise should be shared. (It should!) Nor do I think there is any neat distinction between physician responsibilities for individual health and public health. But I worry that when Strous and Karni alternately frame physician duties to “speak out” as individual duties and collective ones, they collapse necessary distinctions between the risks, benefits, and demands of these two types of obligations.

Many of us have various role-based individual responsibilities. We can have obligations as a parent, as a citizen, or as a professional. Having an individual responsibility as a physician involves duties to your patients, but also general duties to care in the event you are in a situation in which your expertise is needed (the “is there a doctor on this flight?” scenario).

Collective responsibility, on the other hand, is when a group has a responsibility as a group. The philosophical literature debates hard-to-resolve questions about what it means to be a “group,” and how groups come to have or discharge responsibilities. Collective responsibility raises complicated questions like: If physicians have a collective responsibility to speak out during the COVID-19 pandemic, does every physician have such an obligation? Does any individual physician?

Because individual obligations attribute duties to specific persons responsible for carrying them out in ways collective duties tend not to, I see why individual physician obligations are attractive. But this comes with risks. One risk is that a physician speaks out as an individual, appealing to the authority of their medical credentials, but not in alignment with their profession.

In my essay I describe a family physician inviting his extended family for a holiday meal during a peak period of SARS-CoV-2 transmission because he didn’t think COVID-19 was a “big deal.”

More infamously, Dr. Scott Atlas served as Donald J. Trump’s coronavirus advisor, and although he is a physician, he did not have experience in public health, infectious disease, or critical care medicine applicable to COVID-19. Atlas was a physician speaking as a physician, but he routinely promoted views starkly different from those of physicians with expertise relevant to the pandemic, and from the guidance coming from scientific and medical communities.

Tuesday, May 18, 2021

Moderators of The Liking Bias in Judgments of Moral Character

Bocian, K., Baryla, W., & Wojciszke, B. (2021).
Personality and Social Psychology Bulletin.

Abstract 

Previous research found evidence for a liking bias in moral character judgments because judgments of liked people are higher than those of disliked or neutral ones. The present article sought conditions moderating this effect. In Study 1 (N = 792), the impact of the liking bias on moral character judgments was strongly attenuated when participants were educated that attitudes bias moral judgments. In Study 2 (N = 376), the influence of liking on moral character attributions was eliminated when participants were accountable for the justification of their moral judgments. Overall, these results suggest that even though liking biases moral character attributions, this bias might be reduced or eliminated when deeper information processing is required to generate judgments of others’ moral character.

Keywords: moral judgments, moral character, attitudes, liking bias, accountability

General Discussion

In this research, we sought to replicate the past results that demonstrated the influence of liking on moral character judgments, and we investigated conditions that could limit this influence. We demonstrated that liking elicited by similarity (Study 1) and mimicry (Study 2) biases the perception of another person’s moral character. Thus, we corroborated previous findings by Bocian et al. (2018), who found that attitudes bias moral judgments. More importantly, we identified conditions that moderate the liking bias. Specifically, in Study 1, we found evidence that forewarning participants that liking can bias moral character judgments weakened the liking bias twofold. In Study 2, we demonstrated that the liking bias was eliminated when we made participants accountable for their moral decisions.

By systematically examining the conditions that reduce the influence of liking on moral character attributions, we built on and extended past work in the area of moral cognition and bias reduction. First, while past studies have focused on the impact of accountability on the fundamental attribution error (Tetlock, 1985), overconfidence (Tetlock & Kim, 1987), or order of information (Schadewald & Limberg, 1992), we examined the effectiveness of accountability in debiasing moral judgments. Thus, we demonstrated that biased moral judgments can be effectively corrected when people are obliged to justify their judgments to others. Second, we showed that educating people that attitudes might bias their moral judgments, to some extent, effectively helped them debias their moral character judgments. We thus extended past research on the effectiveness of forewarning people of biases in social judgment and decision-making (Axt et al., 2018; Hershberger et al., 1997) to biases in moral judgments.

Saturday, May 15, 2021

Moral zombies: why algorithms are not moral agents

VĂ©liz, C. 
AI & Soc (2021). 

Abstract

In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.

Conclusion

This paper has argued that moral zombies—creatures that behave like moral agents but lack sentience—are incoherent as moral agents. Only beings who can experience pain and pleasure can understand what it means to inflict pain or cause pleasure, and only those with this moral understanding can be moral agents. What I have dubbed ‘moral zombies’ are relevant because they are similar to algorithms in that they make moral decisions as human beings would—determining who gets which benefits and penalties—without having any concomitant sentience.

There might come a time when AI becomes so sophisticated that robots might possess desires and values of their own. It will not, however, be on account of their computational prowess, but on account of their sentience, which may in turn require some kind of embodiment. At present, we are far from creating sentient algorithms.

When algorithms cause moral havoc, as they often do, we must look to the human beings who designed, programmed, commissioned, implemented, and were supposed to supervise them to assign the appropriate blame. For all their complexity and flair, algorithms are nothing but tools, and moral agents are fully responsible for the tools they create and use.

Monday, February 1, 2021

Does civility pay?

Porath, C. L., & Gerbasi, A. (2015). 
Organizational Dynamics, 44(4), 281–286.

Abstract 

Being nice may bring you friends, but does it help or harm you in your career? After all, research by Timothy Judge and colleagues shows a negative relationship between a person’s agreeableness and income. Research by Amy Cuddy has shown that warm people are perceived to be less competent, which is likely to have negative career implications. People who buck social rules by treating people rudely and getting away with it tend to garner power. If you are civil you may be perceived as weak, and ignored or taken advantage of. Being kind or considerate may be hazardous to your self-esteem, goal achievement, influence, career, and income. Over the last two decades we have studied the costs of incivility and the benefits of civility. We’ve polled tens of thousands of workers across industries around the world about how they’re treated on the job and the effects. The costs of incivility are enormous. Organizations and their employees would be much more likely to thrive if employees treated each other respectfully. Many see civility as an investment and are skeptical about the potential returns. Porath surveyed hundreds of employees across organizations spanning more than 17 industries and found that a quarter believe that they will be less leader-like, and nearly 40 percent are afraid that they’ll be taken advantage of if they’re nice at work. Nearly half think that it is better to flex your muscles to garner power. In network studies of a biotechnology firm and international MBAs, along with surveys and experiments, we address whether civility pays. In this article we discuss our findings and propose recommendations for leaders and organizations.

(cut)

Conclusions

Civility pays. It is a potent behavior you want to master to enhance your influence and effectiveness. It is unique in the sense that it elicits both warmth and competence, the two characteristics that account for over 90 percent of positive impressions. By being respectful you enhance, not deter, career opportunities and effectiveness.

Sunday, December 27, 2020

Do criminals freely decide to commit offences? How the courts decide

J. Kennett & A. McCay
The Conversation
Originally published 15 OCT 20

Here is an excerpt:

Expert witnesses were reportedly divided on whether Gargasoulas had the capacity to properly participate in his trial, despite suffering from paranoid schizophrenia and delusions.

A psychiatrist for the defence said Gargasoulas’ delusional belief system “overwhelms him”; the psychiatrist expressed concern Gargasoulas was using the court process as a platform to voice his belief he is the messiah.

A second forensic psychiatrist agreed Gargasoulas was “not able to rationally enter a plea”.

However, a psychologist for the prosecution assessed him as fit and the prosecution argued there was evidence from recorded phone calls that he was capable of rational thought.

Notwithstanding the opinion of the majority of expert witnesses, the jury found Gargasoulas was fit to stand trial, and later he was convicted and sentenced to life imprisonment.

Working from media reports, it is difficult to be sure precisely what happened in court, and we cannot know why the jury favoured the evidence suggesting he was fit to stand trial. However, it is interesting to consider whether research into the psychology of blame and punishment can shed any light on their decision.

Questions of consequence

Some psychologists argue judgements of blame are not always based on a balanced assessment of free will or rational control, as the law presumes. Sometimes we decide how much control or freedom a person possessed based upon our automatic negative responses to harmful consequences.

As the psychologist Mark Alicke says:

“we simply don’t want to excuse people who do horrible things, regardless of how disordered their cognitive states may be.”
When a person has done something very bad, we are motivated to look for evidence that supports blaming them and to downplay evidence that might excuse them by showing that they lacked free will.

Thursday, December 17, 2020

AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust

Capgemini Research Institute

In the wake of the COVID-19 crisis, our reliance on AI has skyrocketed. Today more than ever before, we look to AI to help us limit physical interactions, predict the next wave of the pandemic, disinfect our healthcare facilities and even deliver our food. But can we trust it?

In the latest report from the Capgemini Research Institute – AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust – we surveyed over 800 organizations and 2,900 consumers to get a picture of the state of ethics in AI today. We wanted to understand what organizations can do to move to AI systems that are ethical by design, how they can benefit from doing so, and the consequences if they don’t. We found that while customers are becoming more trusting of AI-enabled interactions, organizations’ progress in ethical dimensions is underwhelming. And this is dangerous because once violated, trust can be difficult to rebuild.

Ethically sound AI requires a strong foundation of leadership, governance, and internal practices around audits, training, and operationalization of ethics. Building on this foundation, organizations have to:
  1. Clearly outline the intended purpose of AI systems and assess their overall potential impact
  2. Proactively deploy AI to achieve sustainability goals
  3. Embed diversity and inclusion principles proactively throughout the lifecycle of AI systems for advancing fairness
  4. Enhance transparency with the help of technology tools, humanize the AI experience and ensure human oversight of AI systems
  5. Ensure technological robustness of AI systems
  6. Empower customers with privacy controls to put them in charge of AI interactions.

Tuesday, August 4, 2020

When a Patient Regrets Having Undergone a Carefully and Jointly Considered Treatment Plan, How Should Her Physician Respond?

L. V. Selby and others
AMA J Ethics. 2020;22(5):E352-357.
doi: 10.1001/amajethics.2020.352.

Abstract

Shared decision making is best utilized when a decision is preference sensitive. However, a consequence of choosing between one of several reasonable options is decisional regret: wishing a different decision had been made. In this vignette, a patient chooses mastectomy to avoid radiotherapy. However, postoperatively, she regrets the more disfiguring operation and wishes she had picked the other option: lumpectomy and radiation. Although the physician might view decisional regret as a failure of shared decision making, the physician should reflect on the process by which the decision was made. If the patient’s wishes and values were explored and the decision was made in keeping with those values, decisional regret should be viewed as a consequence of decision making, not necessarily as a failure of shared decision making.

(cut)

Commentary

This case vignette highlights decisional regret, which is one of the possible consequences of the patient decision-making process when there are multiple treatment options available. Although the process of shared decision making, which appears to have been carried out in this case, is utilized to help guide the patient and the physician to come to a mutually acceptable and optimal health care decision, it clearly does not always obviate the risk of a patient’s regretting that decision after treatment. Ironically, the patient might end up experiencing more regret after participating in a decision-making process in which more rather than fewer options are presented and in which the patient perceives the process as collaborative rather than paternalistic. For example, among men with prostate cancer, those with lower levels of decisional involvement had lower levels of decisional regret. We argue that decisional regret does not mean that shared decision making is not best practice, even though it can result in patients being reminded of their role in the decision and associated personal regret with that decision.


Thursday, May 14, 2020

Is justice blind or myopic? An examination of the effects of meta-cognitive myopia and truth bias on mock jurors and judges

M. Pantazi, O. Klein, & M. Kissine
Judgment and Decision Making, 
Vol. 15, No. 2, March 2020, pp. 214-229

Abstract

Previous studies have shown that people are truth-biased in that they tend to believe the information they receive, even if it is clearly flagged as false. The truth bias has recently been proposed to be an instance of meta-cognitive myopia, that is, of a generalized human insensitivity towards the quality and correctness of the information available in the environment. In two studies we tested whether meta-cognitive myopia and the ensuing truth bias may operate in a courtroom setting. Based on a well-established paradigm in the truth-bias literature, we asked mock jurors (Study 1) and professional judges (Study 2) to read two crime reports containing aggravating or mitigating information that was explicitly flagged as false. Our findings suggest that jurors and judges are truth-biased, as their decisions and memory about the cases were affected by the false information. We discuss the implications of the potential operation of the truth bias in the courtroom, in the light of the literature on inadmissible and discredited evidence, and make some policy suggestions.

From the Discussion:

Fortunately, the judiciary system is to some extent shielded from intrusions of illegitimate evidence, since objections are most often raised before a witness’s answer or piece of evidence is presented in court. Therefore, most of the time, inadmissible or false evidence is prevented from entering the fact-finders’ mental representations of a case in the first place. Nevertheless, objections can also be raised after a witness’s response has been given. Such objections may not actually protect the fact-finders from information that has already been presented. An important question that remains open from a policy perspective is therefore how we are to safeguard the rules of evidence, given the fact-finders’ inability to take such meta-information into account.


Sunday, May 3, 2020

Complicit silence in medical malpractice

Editorial
Volume 395, Issue 10223, p. 467
February 15, 2020

Clinicians and health-care managers displayed “a capacity for willful blindness” that allowed Ian Paterson to hide in plain sight—that is the uncomfortable opening statement of the independent inquiry into Paterson's malpractice, published on Feb 4, 2020. Paterson worked as a consultant surgeon from 1993 to 2011 in both private and National Health Service hospitals in West Midlands, UK. During that period, he treated thousands of patients, many of whom had surgery. Paterson demonstrated an array of abhorrent and unsafe activities over this time, including exaggerating patients' diagnoses to coerce them into having surgery, performing his own version of a mastectomy, which goes against internationally agreed oncological principles, and inappropriate conduct towards patients and staff.

The inquiry makes a range of valuable recommendations that cover regulatory reform, corporate accountability, information for patients, informed consent, complaints, and clinical indemnity. The crucial message is that these reforms must occur across both the NHS and the private sector and must be implemented earnestly and urgently. But many of the issues in the Paterson case cannot be regulated and flow from the murky waters of medical professionalism. At times during the 87 pages of patient testimony, patients suggested in hindsight they could see that other clinicians knew there was a problem with Paterson but did not say anything. The hurt and disappointment that patients felt with the medical profession are chilling.
