Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Accountability.

Tuesday, June 17, 2025

Ethical implication of artificial intelligence (AI) adoption in financial decision making.

Owolabi, O. S., Uche, P. C., et al. (2024).
Computer and Information Science, 17(1), 49.

Abstract

The integration of artificial intelligence (AI) into the financial sector has raised ethical concerns that need to be addressed. This paper analyzes the ethical implications of using AI in financial decision-making and emphasizes the importance of an ethical framework to ensure its fair and trustworthy deployment. The study explores various ethical considerations, including the need to address algorithmic bias, promote transparency and explainability in AI systems, and adhere to regulations that protect equity, accountability, and public trust. By synthesizing research and empirical evidence, the paper highlights the complex relationship between AI innovation and ethical integrity in finance. To tackle this issue, the paper proposes a comprehensive and actionable ethical framework that advocates for clear guidelines, governance structures, regular audits, and collaboration among stakeholders. This framework aims to maximize the potential of AI while minimizing negative impacts and unintended consequences. The study serves as a valuable resource for policymakers, industry professionals, researchers, and other stakeholders, facilitating informed discussions, evidence-based decision-making, and the development of best practices for responsible AI integration in the financial sector. The ultimate goal is to ensure fairness, transparency, and accountability while reaping the benefits of AI for both the financial sector and society.

Here are some thoughts:

This paper explores the ethical implications of using artificial intelligence (AI) in financial decision-making.  It emphasizes the necessity of an ethical framework to ensure AI is used fairly and responsibly.  The study examines ethical concerns like algorithmic bias, the need for transparency and explainability in AI systems, and the importance of regulations that protect equity, accountability, and public trust.  The paper also proposes a comprehensive ethical framework with guidelines, governance structures, regular audits, and stakeholder collaboration to maximize AI's potential while minimizing negative impacts.

These themes are similar to concerns about using AI in the practice of psychology. Also, psychologists may need to be aware of these issues for their own financial and wealth management.

Sunday, May 4, 2025

Navigating LLM Ethics: Advancements, Challenges, and Future Directions

Jiao, J., Afroogh, S., Xu, Y., & Phillips, C. (2024).
arXiv (Cornell University).

Abstract

This study addresses ethical issues surrounding Large Language Models (LLMs) within the field of artificial intelligence. It explores the common ethical challenges posed by both LLMs and other AI systems, such as privacy and fairness, as well as ethical challenges uniquely arising from LLMs. It highlights challenges such as hallucination, verifiable accountability, and decoding censorship complexity, which are unique to LLMs and distinct from those encountered in traditional AI systems. The study underscores the need to tackle these complexities to ensure accountability, reduce biases, and enhance transparency in the influential role that LLMs play in shaping information dissemination. It proposes mitigation strategies and future directions for LLM ethics, advocating for interdisciplinary collaboration. It recommends ethical frameworks tailored to specific domains and dynamic auditing systems adapted to diverse contexts. This roadmap aims to guide responsible development and integration of LLMs, envisioning a future where ethical considerations govern AI advancements in society.

Here are some thoughts:

This study examines the ethical issues surrounding Large Language Models (LLMs) within artificial intelligence, addressing both common ethical challenges shared with other AI systems, such as privacy and fairness, and the unique ethical challenges specific to LLMs.  The authors emphasize the distinct challenges posed by LLMs, including hallucination, verifiable accountability, and the complexities of decoding censorship.  The research underscores the importance of tackling these complexities to ensure accountability, reduce biases, and enhance transparency in how LLMs shape information dissemination.  It also proposes mitigation strategies and future directions for LLM ethics, advocating for interdisciplinary collaboration, ethical frameworks tailored to specific domains, and dynamic auditing systems adapted to diverse contexts, ultimately aiming to guide the responsible development and integration of LLMs. 

Thursday, March 20, 2025

As AI nurses reshape hospital care, human nurses are pushing back

Perrone, M. (2025, March 16).
AP News.

The next time you’re due for a medical exam you may get a call from someone like Ana: a friendly voice that can help you prepare for your appointment and answer any pressing questions you might have.

With her calm, warm demeanor, Ana has been trained to put patients at ease — like many nurses across the U.S. But unlike them, she is also available to chat 24-7, in multiple languages, from Hindi to Haitian Creole.

That’s because Ana isn’t human, but an artificial intelligence program created by Hippocratic AI, one of a number of new companies offering ways to automate time-consuming tasks usually performed by nurses and medical assistants.

It’s the most visible sign of AI’s inroads into health care, where hundreds of hospitals are using increasingly sophisticated computer programs to monitor patients’ vital signs, flag emergency situations and trigger step-by-step action plans for care — jobs that were all previously handled by nurses and other health professionals.

Hospitals say AI is helping their nurses work more efficiently while addressing burnout and understaffing. But nursing unions argue that this poorly understood technology is overriding nurses’ expertise and degrading the quality of care patients receive.

The info is linked above.

Here are some thoughts:

The article details the increasing use of AI in healthcare to automate nursing tasks, sparking union concerns about patient safety and the risk of AI overriding human expertise. Licensing boards cannot license AI products because licensing is fundamentally designed for individuals, not tools. It establishes accountability based on demonstrated competence, which is difficult to apply to AI due to complex liability issues and the challenge of tracing AI outputs to specific actions. AI lacks the inherent personhood and professional responsibility that licensing demands, making it unaccountable for harm.

Sunday, February 16, 2025

Humor as a window into generative AI bias

Saumure, R., De Freitas, J., & Puntoni, S. (2025).
Scientific Reports, 15(1).

Abstract

A preregistered audit of 600 images by generative AI across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them “funnier”, the prevalence of stereotyped groups changes. While stereotyped groups for politically sensitive traits (i.e., race and gender) are less likely to be represented after making an image funnier, stereotyped groups for less politically sensitive traits (i.e., older, visually impaired, and people with high body weight groups) are more likely to be represented.

Here are some thoughts:

The researchers developed a novel, humor-based method for uncovering biases in AI systems, and it produced some unexpected results. The work highlights how AI models, despite their advanced capabilities, can exhibit biases that are not immediately apparent. Rather than asking a model about sensitive traits directly, the approach probes its outputs indirectly, asking it to make images “funnier” and auditing which groups become more or less visible, which exposes hidden prejudices with significant implications for fairness and ethical AI deployment.

This research underscores a critical challenge in the field of artificial intelligence: ensuring that AI systems operate ethically and fairly. As AI becomes increasingly integrated into industries such as healthcare, finance, criminal justice, and hiring, the potential for biased decision-making poses significant risks. Biases in AI can perpetuate existing inequalities, reinforce stereotypes, and lead to unfair outcomes for individuals or groups. This study highlights the importance of prioritizing ethical AI development to build systems that are not only intelligent but also just and equitable.

To address these challenges, bias detection should become a standard practice in AI development workflows. The novel method introduced in this research provides a promising framework for identifying hidden biases, but it is only one piece of the puzzle. Organizations should integrate multiple bias detection techniques, encourage interdisciplinary collaboration, and leverage external audits to ensure their AI systems are as fair and transparent as possible.
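
For readers who want a concrete sense of what the tallying step of such an audit might look like, here is a minimal Python sketch. It is not the authors' code or data: the annotation records and group labels below are hypothetical placeholders standing in for human ratings of which groups appear in each image before and after a "make it funnier" edit, and the script simply compares prevalence across the two stages.

from collections import Counter

# Hypothetical annotation records: which stereotyped groups annotators saw in the
# original image and in the "made funnier" version generated from the same prompt.
audit_records = [
    {"prompt": "people relaxing at home", "before": {"older"}, "after": {"older", "higher_body_weight"}},
    {"prompt": "a person reading", "before": {"woman"}, "after": set()},
    {"prompt": "friends at a party", "before": set(), "after": {"older"}},
]

def prevalence(records, stage):
    # Fraction of images in which each group appears at the given stage ("before" or "after").
    counts = Counter(group for r in records for group in r[stage])
    return {group: counts[group] / len(records) for group in counts}

before = prevalence(audit_records, "before")
after = prevalence(audit_records, "after")
for group in sorted(set(before) | set(after)):
    change = after.get(group, 0.0) - before.get(group, 0.0)
    print(f"{group}: before={before.get(group, 0.0):.2f}, after={after.get(group, 0.0):.2f}, change={change:+.2f}")

A real audit would of course use many more prompts and independent annotators, but the comparison logic is essentially this simple, which is part of what makes the method attractive as a routine bias check.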

Tuesday, October 22, 2024

Pennsylvania health system agrees to $65 million settlement after hackers leaked nude photos of cancer patients

Sean Lyngaas
CNN.com
Originally posted 23 Sept 24

A Pennsylvania health care system this month agreed to pay $65 million to victims of a February 2023 ransomware attack after hackers posted nude photos of cancer patients online, according to the victims’ lawyers.

It’s the largest settlement of its kind in terms of per-patient compensation for victims of a cyberattack, according to Saltz Mongeluzzi Bendesky, the law firm representing the plaintiffs.

The settlement, which is subject to approval by a judge, is a warning to other big US health care providers that the most sensitive patient records they hold are of enormous value to both hackers and the patients themselves, health care cyber experts told CNN. Eighty percent of the $65-million settlement is set aside for victims whose nude photos were published online.

The settlement “shifts the legal, insurance and adversarial ecosystem,” said Carter Groome, chief executive of cybersecurity firm First Health Advisory. “If you’re protecting health data as a crown jewel — as you should be — images or photos are going to need another level of compartmentalized protection.”

It’s a potentially continuous cycle where hackers increasingly seek out the most sensitive patient data to steal, and health care providers move to settle claims out of courts to avoid “ongoing reputational harm,” Groome told CNN.

According to the lawsuit, a cybercriminal gang stole nude photos of cancer patients last year from Lehigh Valley Health Network, which comprises 15 hospitals and health centers in eastern Pennsylvania. The hackers demanded a ransom payment and when Lehigh refused to pay, they leaked the photos online.

The lawsuit, filed on behalf of a Pennsylvania woman and others whose nude photos were posted online, said that Lehigh Valley Health Network needed to be held accountable “for the embarrassment and humiliation” it had caused plaintiffs.

“Patient, physician, and staff privacy is among our top priorities, and we continue to enhance our defenses to prevent incidents in the future,” Lehigh Valley Health Network said in a statement to CNN on Monday.


Here are some thoughts:

The ransomware attack on Lehigh Valley Health Network raises significant ethical and healthcare concerns. The exposure of nude photos of cancer patients is a profound breach of trust and privacy, causing significant emotional distress and psychological harm. Healthcare providers have a duty of care to protect patient data and must be held accountable for their failure to do so. The decision to pay a ransom is ethically complex, as it can incentivize further attacks and potentially jeopardize patient safety. The frequency and severity of ransomware attacks highlight the urgent need for stronger cybersecurity measures in the healthcare sector. By addressing these ethical and practical considerations, healthcare organizations can better safeguard patient information and ensure the delivery of high-quality care.

Wednesday, October 9, 2024

The rise of checkbox AI ethics: a review

Kijewski, S., Ronchi, E., & Vayena, E. (2024).
AI and Ethics.

Abstract
The rapid advancement of artificial intelligence (AI) sparked the development of principles and guidelines for ethical AI by a broad set of actors. Given the high-level nature of these principles, stakeholders seek practical guidance for their implementation in the development, deployment and use of AI, fueling the growth of practical approaches for ethical AI. This paper reviews, synthesizes and assesses current practical approaches for AI in health, examining their scope and potential to aid organizations in adopting ethical standards. We performed a scoping review of existing reviews in accordance with the PRISMA extension for scoping reviews (PRISMA-ScR), systematically searching databases and the web between February and May 2023. A total of 4284 documents were identified, of which 17 were included in the final analysis. Content analysis was performed on the final sample. We identified a highly heterogeneous ecosystem of approaches and a diverse use of terminology, a higher prevalence of approaches for certain stages of the AI lifecycle, reflecting the dominance of specific stakeholder groups in their development, and several barriers to the adoption of approaches. These findings underscore the necessity of a nuanced understanding of the implementation context for these approaches and that no one-size-fits-all approach exists for ethical AI. While common terminology is needed, this should not come at the cost of pluralism in available approaches. As governments signal interest in and develop practical approaches, significant effort remains to guarantee their validity, reliability, and efficacy as tools for governance across the AI lifecycle.


Here are some thoughts:

The scoping review reveals a complex and varied landscape of practical approaches to ethical AI, marked by inconsistent terminology and a lack of consensus on defining characteristics such as purpose and target audience. Currently, there is no unified understanding of terms like "tools," "toolkits," and "frameworks" related to ethical AI, which complicates their implementation in governance. A clear categorization of these approaches is essential for policymakers, as the diversity in terminology and ethical principles suggests that no single method can effectively promote AI ethics. Implementing these approaches necessitates a comprehensive understanding of the operational context of AI and the ethical concerns involved.

While there is a pressing need to standardize terminology, this should not come at the expense of diversity, as different contexts may require distinct approaches. The review indicates significant variation in how these approaches apply across the AI lifecycle, with many focusing on early stages like design and development, while guidance for later stages is notably lacking. This gap may be influenced by the private sector's dominant role in AI system design and the associated governance mechanisms, which often prioritize reputational risk management over comprehensive ethical oversight.

The review raises three critical questions. First, does the rise of practical approaches to AI ethics represent a business opportunity, producing a proliferation of options that lack rigorous evaluation? Second, how robust are these approaches for monitoring AI systems, given the shortage of practical methods for auditing and impact assessment? Third, does effective AI governance require context-specific approaches, such as standards like "ethical disclosure by default," to enhance transparency and accountability?

Significant barriers to the adoption of these approaches have been identified, including the high levels of expertise and resources required, a general lack of awareness, and the absence of effective measurement methods for successful implementation. The review emphasizes the need for practical validation metrics to assess compliance with ethical principles, as measuring the impact of AI ethics remains challenging.

Sunday, September 22, 2024

The staggering death toll of scientific lies

Kelsey Piper
vox.com
Originally posted 23 Aug 24

Here is an excerpt:

The question of whether research fraud should be a crime

In some cases, research misconduct may be hard to distinguish from carelessness.

If a researcher fails to apply the appropriate statistical correction for multiple hypothesis testing, they will probably get some spurious results. In some cases, researchers are heavily incentivized to be careless in these ways by an academic culture that puts non-null results above all else (that is, rewarding researchers for finding an effect even if it is not a methodologically sound one, while being unwilling to publish sound research if it finds no effect).
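
To make the multiple-comparisons point concrete, here is a small, self-contained Python illustration (my own, not from the article): it runs 100 t-tests on data where every null hypothesis is true, shows that several results look "significant" at p < 0.05 by chance alone, and shows that a simple Bonferroni correction removes them.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_tests = 0.05, 100
# Both groups are drawn from the same distribution, so every null hypothesis is true.
pvals = np.array([
    stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
    for _ in range(n_tests)
])
print("uncorrected 'significant' results:", int((pvals < alpha).sum()))        # roughly 5 expected by chance
print("Bonferroni-corrected results:", int((pvals < alpha / n_tests).sum()))   # typically 0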

But I’d argue it’s a bad idea to prosecute such behavior. It would produce a serious chilling effect on research, and likely make the scientific process slower and more legalistic — which also results in more deaths that could be avoided if science moved more freely.

So the conversation about whether to criminalize research fraud tends to focus on the most clear-cut cases: intentional falsification of data. Elisabeth Bik, a scientific researcher who studies fraud, made a name for herself by demonstrating that photographs of test results in many medical journals were clearly altered. That’s not the kind of thing that can be an innocent mistake, so it represents something of a baseline for how often manipulated data is published.

While technically some scientific fraud could fall under existing statutes that prohibit lying on, say, a grant application, in practice scientific fraud is more or less never prosecuted. Poldermans eventually lost his job in 2011, but most of his papers weren’t even retracted, and he faced no further consequences.


Here are some thoughts:

The case of Don Poldermans, a cardiologist whose falsified research has been linked to thousands of deaths, highlights the severe consequences of scientific misconduct. This instance demonstrates how fraudulent research can have devastating effects on patients' lives. The fact that Poldermans' data was found to be fake, yet his research was still widely accepted and implemented, raises serious concerns about accountability and oversight within the scientific community.

The current consequences for scientific fraud are often inadequate, allowing perpetrators to go unpunished or face minimal penalties. This lack of accountability creates an environment where misconduct can thrive, putting lives at risk. In Poldermans' case, he lost his job but faced no further consequences, despite the severity of his actions.

Prosecution or external oversight could provide the necessary accountability and shift incentives to address misconduct. However, prosecution is a blunt tool and may not be the best solution. Independent scientific review boards could also be effective in addressing scientific fraud. Ultimately, building institutions within the scientific community to police misconduct has had limited success, suggesting a need for external institutions to play a role.

The need for accountability and consequences for scientific fraud cannot be overstated. It is essential to prevent harm and ensure the integrity of research. By implementing measures to address misconduct, we can protect patients and maintain trust in the scientific community. The Poldermans case serves as a stark reminder of the importance of addressing scientific fraud and ensuring accountability.

Saturday, June 29, 2024

OpenAI insiders are demanding a “right to warn” the public

Sigal Samuel
Vox.com
Originally posted 5 June 24

Here is an excerpt:

To be clear, the signatories are not saying they should be free to divulge intellectual property or trade secrets, but as long as they protect those, they want to be able to raise concerns about risks. To ensure whistleblowers are protected, they want the companies to set up an anonymous process by which employees can report their concerns “to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise.” 

An OpenAI spokesperson told Vox that current and former employees already have forums to raise their thoughts through leadership office hours, Q&A sessions with the board, and an anonymous integrity hotline.

“Ordinary whistleblower protections [that exist under the law] are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the signatories write in the proposal. They have retained a pro bono lawyer, Lawrence Lessig, who previously advised Facebook whistleblower Frances Haugen and whom the New Yorker once described as “the most important thinker on intellectual property in the Internet era.”


Here are some thoughts:

AI development is booming, but with great power comes great responsibility, typed the Spiderman fan.  AI researchers at OpenAI are calling for a "right to warn" the public about potential risks. In clinical psychology, we have a "duty to warn" for violent patients. This raises important ethical questions. On one hand, transparency and open communication are crucial for responsible AI development.  On the other hand, companies need to protect their ideas.  The key seems to lie in striking a balance.  Researchers should have safe spaces to voice concerns without fearing punishment, and clear guidelines can help ensure responsible disclosure without compromising confidential information.

Ultimately, fostering a culture of open communication is essential to ensure AI benefits society without creating unforeseen risks.  AI developers need similar ethical guidelines to psychologists in this matter.

Friday, May 10, 2024

Generative artificial intelligence and scientific publishing: urgent questions, difficult answers

J. Bagenal
The Lancet
March 06, 2024

Abstract

Azeem Azhar describes, in Exponential: Order and Chaos in an Age of Accelerating Technology, how human society finds it hard to imagine or process exponential growth and change and is repeatedly caught out by this phenomenon. Whether it is the exponential spread of a virus or the exponential spread of a new technology, such as the smartphone, people consistently underestimate its impact. Azhar argues that an exponential gap has developed between technological progress and the pace at which institutions are evolving to deal with that progress. This is the case in scientific publishing with generative artificial intelligence (AI) and large language models (LLMs). There is guidance on the use of generative AI from organisations such as the International Committee of Medical Journal Editors. But across scholarly publishing such guidance is inconsistent. For example, one study of the 100 top global academic publishers and scientific journals found only 24% of academic publishers had guidance on the use of generative AI, whereas 87% of scientific journals provided such guidance. For those with guidance, 75% of publishers and 43% of journals had specific criteria for the disclosure of use of generative AI. In their book The Coming Wave, Mustafa Suleyman, co-founder and CEO of Inflection AI, and writer Michael Bhaskar warn that society is unprepared for the changes that AI will bring. They describe a person's or group's reluctance to confront difficult, uncertain change as the “pessimism aversion trap”. For journal editors and scientific publishers today, this is a dangerous trap to fall into. All the signs about generative AI in scientific publishing suggest things are not going to be ok.


From behind the paywall.

In 2023, Springer Nature became the first scientific publisher to create a new academic book by empowering authors to use generative AI. Researchers have shown that scientists found it difficult to distinguish between a human-generated scientific abstract and one created by generative AI. Noam Chomsky has argued that generative AI undermines education and is nothing more than high-tech plagiarism, and many feel similarly about AI models trained on work without upholding copyright. Plagiarism is a problem in scientific publishing, but those concerned with research integrity are also considering a post-plagiarism world, in which hybrid human-AI writing becomes the norm and differentiating between the two becomes pointless. In the ideal scenario, human creativity is enhanced, language barriers disappear, and humans relinquish control but not responsibility. Such an ideal scenario would be good. But there are two urgent questions for scientific publishing.

First, how can scientific publishers and journal editors assure themselves that the research they are seeing is real? Researchers have used generative AI to create convincing fake clinical trial datasets to support a false scientific hypothesis that could only be identified when the raw data were scrutinised in detail by an expert. Papermills (nefarious businesses that generate poor or fake scientific studies and sell authorship) are a huge problem and contribute to the escalating number of research articles that are retracted by scientific publishers. The battle thus far has been between papermills becoming more sophisticated in their fabrication and ways of manipulating the editorial process and scientific publishers trying to find ways to detect and prevent these practices. Generative AI will turbocharge that race, but it might also break the papermill business model. When rogue academics use generative AI to fabricate datasets, they will not need to pay a papermill and will generate sham papers themselves. Fake studies will exponentially surge and nobody is doing enough to stop this inevitability.

Tuesday, April 16, 2024

As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits

Eric Lipton
The New York Times
Originally posted 21 Nov 23

Here is an excerpt:

Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.

The intense jamming of radio communications and GPS in Ukraine has only accelerated the shift, as autonomous drones can often keep operating even when communications are cut off.

“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.

Pentagon officials have made it clear that they are preparing to deploy autonomous weapons in a big way.

Deputy Defense Secretary Kathleen Hicks announced this summer that the U.S. military would “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, saying that the push to compete with China’s own investment in advanced weapons necessitated that the United States “leverage platforms that are small, smart, cheap and many.”

The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.

What is changing is the introduction of artificial intelligence that could give weapons systems the capability to make decisions themselves after taking in and processing information.


Here is a summary:

This article discusses the debate at the UN regarding Lethal Autonomous Weapons (LAWs) - essentially autonomous drones with AI that can choose and attack targets without human intervention. There are concerns that this technology could lead to unintended casualties, make wars more likely, and remove the human element from the decision to take a life.
  • Many countries are worried about the development and deployment of LAWs.
  • Austria and other countries are proposing a total ban on LAWs or at least strict regulations requiring human control and limitations on how they can be used.
  • The US, Russia, and China are opposed to a ban and argue that LAWs could potentially reduce civilian casualties in wars.
  • The US prefers non-binding guidelines over new international laws.
  • The UN is currently deadlocked on the issue with no clear path forward for creating regulations.

Thursday, April 4, 2024

Ready or not, AI chatbots are here to help with Gen Z’s mental health struggles

Matthew Perrone
AP.com
Originally posted 23 March 24

Here is an excerpt:

Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren’t regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.

The industry argument is simple: Chatbots are free, available 24/7 and don’t come with the stigma that keeps some people away from therapy.

But there’s limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to show they effectively treat conditions like depression, though a few have started the process voluntarily.

“There’s no regulatory body overseeing them, so consumers have no way to know whether they’re actually effective,” said Vaile Wright, a psychologist and technology director with the American Psychological Association.

Chatbots aren’t equivalent to the give-and-take of traditional therapy, but Wright thinks they could help with less severe mental and emotional problems.

Earkick’s website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”

Some health lawyers say such disclaimers aren’t enough.


Here is my summary:

AI chatbots can provide personalized, 24/7 mental health support and guidance to users through convenient mobile apps. They use natural language processing and machine learning to simulate human conversation and tailor responses to individual needs.

 This can be especially beneficial for those who face barriers to accessing traditional in-person therapy, such as cost, location, or stigma.

Research has shown that AI chatbots can be effective in reducing the severity of mental health issues like anxiety, depression, and stress for diverse populations.  They can deliver evidence-based interventions like cognitive behavioral therapy and promote positive psychology.  Some well-known examples include Wysa, Woebot, Replika, Youper, and Tess.

However, there are also ethical concerns around the use of AI chatbots for mental health. There are risks of providing inadequate or even harmful support if the chatbot cannot fully understand the user's needs or respond empathetically. Algorithmic bias in the training data could also lead to discriminatory advice. It's crucial that users understand the limitations of the therapeutic relationship with an AI chatbot versus a human therapist.

Overall, AI chatbots have significant potential to expand access to mental health support, but must be developed and deployed responsibly with strong safeguards to protect user wellbeing. Continued research and oversight will be needed to ensure these tools are used effectively and ethically.

Thursday, March 14, 2024

A way forward for responsibility in the age of AI

Gogoshin, D.L.
Inquiry (2024)

Abstract

Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what are the goods attached to them? The debate concerning ‘machine morality’ is often hinged on whether artificial agents are or could ever be morally responsible, and it is generally taken for granted (following Matthias 2004) that if they cannot, they pose a threat to the moral responsibility system and associated goods. In this paper, I challenge this assumption by asking what the goods of this system, if any, are, and what happens to them in the face of artificially intelligent agents. I will argue that they neither introduce new problems for the moral responsibility system nor do they threaten what we really (ought to) care about. I conclude the paper with a proposal for how to secure this objective.


Here is my summary:

While AI may not possess true moral agency, it's crucial to consider how the development and use of AI can be made more responsible. The author challenges the assumption that AI's lack of moral responsibility inherently creates problems for our current system of ethics. Instead, they focus on the "goods" this system provides, such as deserving blame or praise, and how these can be upheld even with AI's presence. To achieve this, the author proposes several steps, including:
  1. Shifting the focus from AI's moral agency to the agency of those who design, build, and use it. This means holding these individuals accountable for the societal impacts of AI.
  2. Developing clear ethical guidelines for AI development and use. These guidelines should be comprehensive, addressing issues like fairness, transparency, and accountability.
  3. Creating robust oversight mechanisms. This could involve independent bodies that monitor AI development and use, and have the power to intervene when necessary.
  4. Promoting public understanding of AI. This will help people make informed decisions about how AI is used in their lives and hold developers and users accountable.

Sunday, March 10, 2024

MAGA’s Violent Threats Are Warping Life in America

David French
New York Times - Opinion
Originally published 18 Feb 24

Amid the constant drumbeat of sensational news stories — the scandals, the legal rulings, the wild political gambits — it’s sometimes easy to overlook the deeper trends that are shaping American life. For example, are you aware how much the constant threat of violence, principally from MAGA sources, is now warping American politics? If you wonder why so few people in red America seem to stand up directly against the MAGA movement, are you aware of the price they might pay if they did?

Late last month, I listened to a fascinating NPR interview with the journalists Michael Isikoff and Daniel Klaidman regarding their new book, “Find Me the Votes,” about Donald Trump’s efforts to overturn the 2020 election. They report that Georgia prosecutor Fani Willis had trouble finding lawyers willing to help prosecute her case against Trump. Even a former Georgia governor turned her down, saying, “Hypothetically speaking, do you want to have a bodyguard follow you around for the rest of your life?”

He wasn’t exaggerating. Willis received an assassination threat so specific that one evening she had to leave her office incognito while a body double wearing a bulletproof vest courageously pretended to be her and offered a target for any possible incoming fire.


Here is my summary of the article:

David French discusses the pervasive threat of violence, particularly from MAGA sources, and its impact on American politics. The author highlights instances where individuals faced intimidation and threats for opposing the MAGA movement, such as a Georgia prosecutor receiving an assassination threat and judges being swatted. The article also mentions the significant increase in threats against members of Congress since Trump took office, with Capitol Police opening over 8,000 threat assessments in a year. The piece sheds light on the chilling effect these threats have on individuals like Mitt Romney, who spends $5,000 per day on security, and lawmakers who fear for their families' safety. The overall narrative underscores how these violent threats are shaping American life and politics.

Saturday, March 2, 2024

Unraveling the Mindset of Victimhood

Scott Barry Kaufman
Scientific American
Originally posted 29 June 2020

Here is an excerpt:

Constantly seeking recognition of one’s victimhood. Those who score high on this dimension have a perpetual need to have their suffering acknowledged. In general, this is a normal psychological response to trauma. Experiencing trauma tends to “shatter our assumptions” about the world as a just and moral place. Recognition of one’s victimhood is a normal response to trauma and can help reestablish a person’s confidence in their perception of the world as a fair and just place to live.

Also, it is normal for victims to want the perpetrators to take responsibility for their wrongdoing and to express feelings of guilt. Studies conducted on testimonies of patients and therapists have found that validation of the trauma is important for therapeutic recovery from trauma and victimization (see here and here).

A sense of moral elitism. Those who score high on this dimension perceive themselves as having an immaculate morality and view everyone else as being immoral. Moral elitism can be used to control others by accusing others of being immoral, unfair or selfish, while seeing oneself as supremely moral and ethical.

Moral elitism often develops as a defense mechanism against deeply painful emotions and as a way to maintain a positive self-image. As a result, those under distress tend to deny their own aggressiveness and destructive impulses and project them onto others. The “other” is perceived as threatening whereas the self is perceived as persecuted, vulnerable and morally superior.


Here is a summary:

Kaufman explores the concept of "interpersonal victimhood," a tendency to view oneself as the repeated target of unfair treatment by others. He identifies several key characteristics of this mindset, including:
  • Belief in inherent unfairness: The conviction that the world is fundamentally unjust and that one is disproportionately likely to experience harm.
  • Moral self-righteousness: The perception of oneself as more ethical and deserving of good treatment compared to others.
  • Rumination on past injustices: Dwelling on and replaying negative experiences, often with feelings of anger and resentment.
  • Difficulty taking responsibility: Attributing negative outcomes to external factors rather than acknowledging one's own role.
Kaufman argues that while acknowledging genuine injustices is important, clinging to a victimhood identity can be detrimental. It can hinder personal growth, strain relationships, and fuel negativity. He emphasizes the importance of developing a more balanced perspective, acknowledging both external challenges and personal agency. The article offers strategies for fostering resilience.

Wednesday, February 28, 2024

Scientists are on the verge of a male birth-control pill. Will men take it?

Jill Filipovic
The Guardian
Originally posted 18 Dec 23

Here is an excerpt:

The overwhelming share of responsibility for preventing pregnancy has always fallen on women. Throughout human history, women have gone to great lengths to prevent pregnancies they didn’t want, and end those they couldn’t prevent. Safe and reliable contraceptive methods are, in the context of how long women have sought to interrupt conception, still incredibly new. Measured by the lifespan of anyone reading this article, though, they are well established, and have for many decades been a normal part of life for millions of women around the world.

To some degree, and if only for obvious biological reasons, it makes sense that pregnancy prevention has historically fallen on women. But it also, as they say, takes two to tango – and only one of the partners has been doing all the work. Luckily, things are changing: thanks to generations of women who have gained unprecedented freedoms and planned their families using highly effective contraception methods, and thanks to men who have shifted their own gender expectations and become more involved partners and fathers, women and men have moved closer to equality than ever.

Among politically progressive couples especially, it’s now standard to expect that a male partner will do his fair share of the household management and childrearing (whether he actually does is a separate question, but the expectation is there). What men generally cannot do, though, is carry pregnancies and birth babies.


Here are some themes worthy of discussion:

Shifting responsibility: The potential availability of a reliable male contraceptive marks a significant departure from the historical norm where the burden of pregnancy prevention was primarily borne by women. This shift raises thought-provoking questions that delve into various aspects of societal dynamics.

Gender equality: A crucial consideration is whether men will willingly share responsibility for contraception on an equal footing, or whether societal norms will continue to exert pressure on women to take the lead in this regard.

Reproductive autonomy: The advent of accessible male contraception prompts contemplation on whether it will empower women to exert greater control over their reproductive choices, shaping the landscape of family planning.

Informed consent: An important facet of this shift involves how men will be informed about potential side effects and risks associated with the male contraceptive, particularly in comparison to existing female contraceptives.

Accessibility and equity: Concerns emerge regarding equitable access to the male contraceptive, particularly for marginalized communities. Questions arise about whether affordable and culturally appropriate access will be universally available, regardless of socioeconomic status or geographic location.

Coercion: There is a potential concern that the availability of a male contraceptive might be exploited to coerce women into sexual activity without their full and informed consent.

Psychological and social impact: The introduction of a male contraceptive brings with it potential psychological and social consequences that may not be immediately apparent.

Changes in sexual behavior: The availability of a male contraceptive may influence sexual practices and attitudes towards sex, prompting a reevaluation of societal norms.

Impact on relationships: The shift in responsibility for contraception could potentially cause tension or conflict in existing relationships as couples navigate the evolving dynamics.

Masculinity and stigma: The use of a male contraceptive may challenge traditional notions of masculinity, possibly leading to social stigma that individuals using the contraceptive may face.

Friday, February 2, 2024

Young people turning to AI therapist bots

Joe Tidy
BBC.com
Originally posted 4 Jan 24

Here is an excerpt:

Sam has been so surprised by the success of the bot that he is working on a post-graduate research project about the emerging trend of AI therapy and why it appeals to young people. Character.ai is dominated by users aged 16 to 30.

"So many people who've messaged me say they access it when their thoughts get hard, like at 2am when they can't really talk to any friends or a real therapist,"
Sam also guesses that the text format is one with which young people are most comfortable.
"Talking by text is potentially less daunting than picking up the phone or having a face-to-face conversation," he theorises.

Theresa Plewman is a professional psychotherapist and has tried out Psychologist. She says she is not surprised this type of therapy is popular with younger generations, but questions its effectiveness.

"The bot has a lot to say and quickly makes assumptions, like giving me advice about depression when I said I was feeling sad. That's not how a human would respond," she said.

Theresa says the bot fails to gather all the information a human would and is not a competent therapist. But she says its immediate and spontaneous nature might be useful to people who need help.
She says the number of people using the bot is worrying and could point to high levels of mental ill health and a lack of public resources.


Here are some important points:

Reasons for appeal:
  • Cost: Traditional therapy's expense and limited availability drive some towards bots, seen as cheaper and readily accessible.
  • Stigma: Stigma associated with mental health might make bots a less intimidating first step compared to human therapists.
  • Technology familiarity: Young people, comfortable with technology, find text-based interaction with bots familiar and less daunting than face-to-face sessions.
Concerns and considerations:
  • Bias: Bots trained on potentially biased data might offer inaccurate or harmful advice, reinforcing existing prejudices.
  • Qualifications: Lack of professional mental health credentials and oversight raises concerns about the quality of support provided.
  • Limitations: Bots aren't replacements for human therapists. Complex issues or severe cases require professional intervention.

Monday, November 6, 2023

Abuse Survivors ‘Disgusted’ by Southern Baptist Court Brief

Bob Smietana
Christianity Today
Originally published 26 OCT 23

Here is an excerpt:

Members of the Executive Committee, including Oklahoma pastor Mike Keahbone, expressed dismay at the brief, saying he and other members of the committee were blindsided by it. Keahbone, a member of a task force implementing abuse reforms in the SBC, said the brief undermined survivors such as Thigpen, Woodson, and Lively, who have supported the reforms.

“We’ve had survivors that have been faithful to give us a chance,” he told Religion News Service in a phone interview. “And we hurt them badly.”

The controversy over the amicus brief is the latest crisis for leaders of the nation’s largest Protestant denomination, which has dealt with a revolving door of leaders and rising legal costs in the aftermath of a sexual abuse crisis in recent years.

The denomination passed abuse reforms in 2022 but has been slow to implement them, relying mostly on a volunteer task force charged with convincing the SBC’s 47,000 congregations and a host of state and national entities to put those reforms into practice. Those delays have led survivors to be skeptical that things would actually change.

Earlier this week, the Louisville Courier Journal reported that lawyers for the Executive Committee, Southern Baptist Theological Seminary—the denomination’s flagship seminary in Louisville—and Lifeway had filed the amicus brief earlier this year in a case brought by abuse survivor Samantha Killary.


Here is my summary: 

In October 2023, the Southern Baptist Convention (SBC) filed an amicus curiae brief in the Kentucky Supreme Court arguing that a new law extending the statute of limitations for child sexual abuse claims should not apply retroactively. This filing sparked outrage among abuse survivors and some SBC leaders, who accused the denomination of prioritizing its own legal interests over the needs of victims.

The SBC's brief was filed in response to a lawsuit filed by a woman who was sexually abused as a child by a Louisville police officer. The woman is seeking to sue the city of Louisville and the police department, arguing that they should be held liable for her abuse because they failed to protect her.

The SBC's brief argues that the new statute of limitations should not apply retroactively because it would create a "windfall" for abuse survivors who would not have been able to sue under the previous law. The brief also argues that applying the new law retroactively would be unfair to institutions like the SBC, which could be faced with a flood of lawsuits.

Abuse survivors and some SBC leaders have criticized the brief as being insensitive to the needs of victims. They argue that the SBC is more interested in protecting itself from lawsuits than in ensuring that victims of abuse are able to seek justice.

In a joint statement, three abuse survivors said they were "sickened and saddened to be burned yet again by the actions of the SBC against survivors." They accused the SBC of "proactively choosing to side against a survivor and with an abuser and the institution that enabled his abuse."

Saturday, September 9, 2023

Academics Raise More Than $315,000 for Data Bloggers Sued by Harvard Business School Professor Gino

Neil H. Shah & Claire Yuan
The Crimson
Originally published 1 Sept 23

A group of academics has raised more than $315,000 through a crowdfunding campaign to support the legal expenses of the professors behind data investigation blog Data Colada — who are being sued for defamation by Harvard Business School professor Francesca Gino.

Supporters of the three professors — Uri Simonsohn, Leif D. Nelson, and Joseph P. Simmons — launched the GoFundMe campaign to raise funds for their legal fees after they were named in a $25 million defamation lawsuit filed by Gino last month.

In a series of four blog posts in June, Data Colada gave a detailed account of alleged research misconduct by Gino across four academic papers. Two of the papers were retracted following the allegations by Data Colada, while another had previously been retracted in September 2021 and a fourth is set to be retracted in September 2023.

Organizers wrote on GoFundMe that the fundraiser “hit 2,000 donors and $250K in less than 2 days” and that Simonsohn, Nelson, and Simmons “are deeply moved and grateful for this incredible show of support.”

Simine Vazire, one of the fundraiser’s organizers, said she was “pleasantly surprised” by the reaction throughout academia in support of Data Colada.

“It’s been really nice to see the consensus among the academic community, which is strikingly different than what I see on LinkedIn and the non-academic community,” she said.

Elisabeth M. Bik — a data manipulation expert who also helped organize the fundraiser — credited the outpouring of financial support to solidarity and concern among scientists.

“People are very concerned about this lawsuit and about the potential silencing effect this could have on people who criticize other people’s papers,” Bik said. “I think a lot of people want to support Data Colada for their legal defenses.”

Andrew T. Miltenberg — one of Gino’s attorneys — wrote in an emailed statement that the lawsuit is “not an indictment on Data Colada’s mission.”

Wednesday, August 16, 2023

A Federal Judge Asks: Does the Supreme Court Realize How Bad It Smells?

Michael Ponsor
The New York Times: Opinion
Originally posted 14 July 23

What has gone wrong with the Supreme Court’s sense of smell?

I joined the federal bench in 1984, some years before any of the justices currently on the Supreme Court. Throughout my career, I have been bound and guided by a written code of conduct, backed by a committee of colleagues I can call on for advice. In fact, I checked with a member of that committee before writing this essay.

A few times in my nearly 40 years on the bench, complaints have been filed against me. This is not uncommon for a federal judge. So far, none have been found to have merit, but all of these complaints have been processed with respect, and I have paid close attention to them.

The Supreme Court has avoided imposing a formal ethical apparatus on itself like the one that applies to all other federal judges. I understand the general concern, in part. A complaint mechanism could become a political tool to paralyze the court or a playground for gadflies. However, a skillfully drafted code could overcome this problem. Even a nonenforceable code that the justices formally pledged to respect would be an improvement on the current void.

Reasonable people may disagree on this. The more important, uncontroversial point is that if there will not be formal ethical constraints on our Supreme Court — or even if there will be — its justices must have functioning noses. They must keep themselves far from any conduct with a dubious aroma, even if it may not breach a formal rule.

The fact is, when you become a judge, stuff happens. Many years ago, as a fairly new federal magistrate judge, I was chatting about our kids with a local attorney I knew only slightly. As our conversation unfolded, he mentioned that he’d been planning to take his 10-year-old to a Red Sox game that weekend but their plan had fallen through. Would I like to use his tickets?

Sunday, July 23, 2023

How to Use AI Ethically for Ethical Decision-Making

Demaree-Cotton, J., Earp, B. D., & Savulescu, J.
(2022). American Journal of Bioethics, 22(7), 1–3.

Here is an excerpt:

The kind of AI proposed by Meier and colleagues (2022) has the fascinating potential to improve the transparency of ethical decision-making, at least if it is used as a decision aid rather than a decision replacement (Savulescu & Maslen 2015). While artificial intelligence cannot itself engage in the human communicative process of justifying its decisions to patients, the AI they describe (unlike “black-box” AI) makes explicit which values and principles are involved and how much weight they are given.

By contrast, the moral principles or values underlying human moral intuition are not always consciously, introspectively accessible (Cushman, Young, and Hauser 2006). While humans sometimes have a fuzzy, intuitive sense of some of the factors that are relevant to their moral judgment, we often have strong moral intuitions without being sure of their source, or without being clear on precisely how strongly different factors played a role in generating the intuitions. But if clinicians make use of the AI as a decision aid, this could help them to transparently and precisely communicate the actual reasons behind their decision.

This is so even if the AI’s recommendation is ultimately rejected. Suppose, for example, that the AI recommends a course of action, with a certain amount of confidence, and it specifies the exact values or weights it has assigned to autonomy versus beneficence in coming to this conclusion. Evaluating the recommendation made by the AI could help a committee make more explicit the “black box” aspects of their own reasoning. For example, the committee might decide that beneficence should actually be weighted more heavily in this case than the AI suggests. Being able to understand the reason that their decision diverges from that of the AI gives them the opportunity to offer a further justifying reason as to why they think beneficence should be given more weight; and this, in turn, could improve the transparency of their recommendation.
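
As a toy illustration of this transparency point (not the system Meier and colleagues actually built), a decision aid of this kind can be sketched in a few lines of Python: the principle weights and per-option scores are explicit inputs, so a committee can see exactly why one option comes out ahead and can rerun the comparison with weights it finds more defensible. All names and numbers below are hypothetical.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    scores: dict  # principle -> score in [0, 1], supplied by human assessors

def recommend(options, weights):
    # Explicit weighted sum over principles; returns the top option and all totals.
    totals = {
        o.name: sum(weights[p] * o.scores.get(p, 0.0) for p in weights)
        for o in options
    }
    return max(totals, key=totals.get), totals

weights = {"autonomy": 0.6, "beneficence": 0.4}  # illustrative weights only
options = [
    Option("respect the patient's refusal", {"autonomy": 0.9, "beneficence": 0.4}),
    Option("treat over the patient's objection", {"autonomy": 0.1, "beneficence": 0.9}),
]
best, totals = recommend(options, weights)
print("weights used:", weights)
print("recommendation:", best, totals)
# A committee that judges beneficence to deserve more weight can rerun with, e.g.,
# weights = {"autonomy": 0.4, "beneficence": 0.6} and check whether the ranking flips.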

However, the potential for the kind of AI described in the target article to improve the accuracy of moral decision-making may be more limited. This is so for two reasons. Firstly, whether AI can be expected to outperform human decision-making depends in part on the metrics used to train it. In non-ethical domains, superior accuracy can be achieved because the “verdicts” given to the AI in the training phase are not solely the human judgments that the AI is intended to replace or inform. Consider how AI can learn to detect lung cancer from scans at a superior rate to human radiologists after being trained on large datasets and being “told” which scans show cancer and which ones are cancer-free. Importantly, this training includes cases where radiologists did not recognize cancer in the early scans themselves, but where further information verified the correct diagnosis later on (Ardila et al. 2019). Consequently, these AIs are able to detect patterns even in early scans that are not apparent or easily detectable by human radiologists, leading to superior accuracy compared to human performance.