Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label AI. Show all posts

Saturday, April 20, 2024

The Dark Side of AI in Mental Health

Michael DePeau-Wilson
MedPage Today
Originally posted 11 April 24

With the rise in patient-facing psychiatric chatbots powered by artificial intelligence (AI), the potential need for patient mental health data could drive a boom in cash-for-data scams, according to mental health experts.

A recent example of controversial data collection appeared on Craigslist when a company called Therapy For All allegedly posted an advertisement offering money for recording therapy sessions without any additional information about how the recordings would be used.

The company's advertisement and website had already been taken down by the time it was highlighted by a mental health influencer on TikTok. However, archived screenshots of the website revealed the company was seeking recorded therapy sessions "to better understand the format, topics, and treatment associated with modern mental healthcare."

Their stated goal was "to ultimately provide mental healthcare to more people at a lower cost," according to the defunct website.

In service of that goal, the company was offering $50 for each recording of a therapy session of at least 45 minutes with clear audio of both the patient and their therapist. The company requested that the patients withhold their names to keep the recordings anonymous.

Here is a summary:

The article highlights several ethical concerns surrounding the use of AI in mental health care:

The lack of patient consent and privacy protections when companies collect sensitive mental health data to train AI models. For example, the nonprofit Koko used OpenAI's GPT-3 to experiment with online mental health support without proper consent protocols.

The issue of companies sharing patient data without authorization, as seen with the Crisis Text Line platform, which led to significant backlash from users.

The clinical risks of relying solely on AI-powered chatbots for mental health therapy, rather than having human clinicians involved. Experts warn this could be "irresponsible and ultimately dangerous" for patients dealing with complex, serious conditions.

The potential for unethical "cash-for-data" schemes, such as the Therapy For All company that sought to obtain recorded therapy sessions without proper consent, in order to train AI models.

Sunday, April 14, 2024

AI and the need for justification (to the patient)

Muralidharan, A., Savulescu, J. & Schaefer, G.O.
Ethics Inf Technol 26, 16 (2024).


This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient’s values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.

Here is my summary:

The article argues that a certain type of AI technology, known as "black box" AI, poses a problem in medicine because it lacks transparency.  This lack of transparency makes it difficult for doctors to explain the AI's recommendations to patients.  In order to make shared decisions about treatment, patients need to understand the reasoning behind those decisions, and how the AI factored in their individual values and preferences.

The article proposes an alternative type of AI, called "Justifiable AI" which would address this problem. Justifiable AI would be designed to make its reasoning process clear, allowing doctors to explain to patients why the AI is recommending a particular course of treatment. This would allow patients to see how the AI's recommendation aligns with their own values, and make informed decisions about their care.

Tuesday, April 9, 2024

Why can’t anyone agree on how dangerous AI will be?

Dylan Matthews
Vox
Originally posted 13 March 24

Here is an excerpt:

The paper focuses on disagreement around AI’s potential to either wipe humanity out or cause an “unrecoverable collapse,” in which the human population shrinks to under 1 million for a million or more years, or global GDP falls to under $1 trillion (less than 1 percent of its current value) for a million years or more. At the risk of being crude, I think we can summarize these scenarios as “extinction or, at best, hell on earth.”

There are, of course, a number of other different risks from AI worth worrying about, many of which we already face today.

Existing AI systems sometimes exhibit worrying racial and gender biases; they can be unreliable in ways that cause problems when we rely upon them anyway; they can be used to bad ends, like creating fake news clips to fool the public or making pornography with the faces of unconsenting people.

But these harms, while surely bad, obviously pale in comparison to “losing control of the AIs such that everyone dies.” The researchers chose to focus on the extreme, existential scenarios.

So why do people disagree on the chances of these scenarios coming true? It’s not due to differences in access to information, or a lack of exposure to differing viewpoints. If it were, the adversarial collaboration, which consisted of massive exposure to new information and contrary opinions, would have moved people’s beliefs more dramatically.

Here is my summary:

The article discusses the ongoing debate surrounding the potential dangers of advanced AI, focusing on whether it could lead to catastrophic outcomes for humanity. The author highlights the contrasting views of experts and superforecasters regarding the risks posed by AI, with experts generally more concerned about disaster scenarios. The study conducted by the Forecasting Research Institute aimed to understand the root of these disagreements through an "adversarial collaboration" where both groups engaged in extensive discussions and exposure to new information.

The research identified key issues, termed "cruxes," that influence people's beliefs about AI risks. One significant crux was the potential for AI to autonomously replicate and acquire resources before 2030. Despite the collaborative efforts, the study did not lead to a convergence of opinions. The article delves into the reasons behind these disagreements, emphasizing fundamental worldview disparities and differing perspectives on the long-term development of AI.

Overall, the article provides insights into why individuals hold varying opinions on AI's dangers, highlighting the complexity of predicting future outcomes in this rapidly evolving field.

Tuesday, April 2, 2024

The Puzzle of Evaluating Moral Cognition in Artificial Agents

Reinecke, M. G., Mao, Y., et al. (2023).
Cognitive Science, 47(8).


In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties like “intention” which may have no natural analog in artificial agents, it may prove difficult to design a “like-for-like” comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI look like? We unravel the complexity of this question by discussing examples within reinforcement learning and generative AI, and we examine how the puzzle of evaluating artificial agents' moral cognition remains open for further investigation within cognitive science.


Here is my summary:

This article delves into the challenges associated with assessing the moral decision-making capabilities of artificial intelligence systems. It explores the complexities of imbuing AI with ethical reasoning and the difficulties in evaluating their moral cognition. The article discusses the need for robust frameworks and methodologies to effectively gauge the ethical behavior of AI, highlighting the intricate nature of integrating morality into machine learning algorithms. Overall, it emphasizes the critical importance of developing reliable methods to evaluate the moral reasoning of artificial agents in order to ensure their responsible and ethical deployment in various domains.

Thursday, March 21, 2024

AI-synthesized faces are indistinguishable from real faces and more trustworthy

Nightingale, S. J., & Farid, H. (2022).
Proceedings of the National Academy of Sciences, 119(8).


Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces.

Here is part of the Discussion section

Synthetically generated faces are not just highly photorealistic, they are nearly indistinguishable from real faces and are judged more trustworthy. This hyperphotorealism is consistent with recent findings. These two studies did not contain the same diversity of race and gender as ours, nor did they match the real and synthetic faces as we did to minimize the chance of inadvertent cues. While it is less surprising that White male faces are highly realistic—because these faces dominate the neural network training—we find that the realism of synthetic faces extends across race and gender. Perhaps most interestingly, we find that synthetically generated faces are more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces which themselves are deemed more trustworthy. Regardless of the underlying reason, synthetically generated faces have emerged on the other side of the uncanny valley. This should be considered a success for the fields of computer graphics and vision. At the same time, easy access (https://thispersondoesnotexist.com) to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns, with serious implications for individuals, societies, and democracies.

We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits. If so, then we discourage the development of technology simply because it is possible. If not, then we encourage the parallel development of reasonable safeguards to help mitigate the inevitable harms from the resulting synthetic media. Safeguards could include, for example, incorporating robust watermarks into the image and video synthesis networks that would provide a downstream mechanism for reliable identification. Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.

Here are some important points:

This research raises concerns about the potential for misuse of AI-generated faces in areas like deepfakes and disinformation campaigns.

It also opens up interesting questions about how we perceive trust and authenticity in our increasingly digital world.

Thursday, March 14, 2024

A way forward for responsibility in the age of AI

Gogoshin, D.L.
Inquiry (2024)


Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what are the goods attached to them? The debate concerning ‘machine morality’ is often hinged on whether artificial agents are or could ever be morally responsible, and it is generally taken for granted (following Matthias 2004) that if they cannot, they pose a threat to the moral responsibility system and associated goods. In this paper, I challenge this assumption by asking what the goods of this system, if any, are, and what happens to them in the face of artificially intelligent agents. I will argue that they neither introduce new problems for the moral responsibility system nor do they threaten what we really (ought to) care about. I conclude the paper with a proposal for how to secure this objective.

Here is my summary:

While AI may not possess true moral agency, it's crucial to consider how the development and use of AI can be made more responsible. The author challenges the assumption that AI's lack of moral responsibility inherently creates problems for our current system of ethics. Instead, they focus on the "goods" this system provides, such as deserving blame or praise, and how these can be upheld even with AI's presence. To achieve this, the author proposes several steps, including:
  1. Shifting the focus from AI's moral agency to the agency of those who design, build, and use it. This means holding these individuals accountable for the societal impacts of AI.
  2. Developing clear ethical guidelines for AI development and use. These guidelines should be comprehensive, addressing issues like fairness, transparency, and accountability.
  3. Creating robust oversight mechanisms. This could involve independent bodies that monitor AI development and use, and have the power to intervene when necessary.
  4. Promoting public understanding of AI. This will help people make informed decisions about how AI is used in their lives and hold developers and users accountable.

Wednesday, March 13, 2024

None of these people exist, but you can buy their books on Amazon anyway

Conspirador Norteno
Originally published 12 Jan 24

Meet Jason N. Martin N. Martin, the author of the exciting and dynamic Amazon bestseller “How to Talk to Anyone: Master Small Talks, Elevate Your Social Skills, Build Genuine Connections (Make Real Friends; Boost Confidence & Charisma)”, which is the 857,233rd most popular book on the Kindle Store as of January 12th, 2024. There are, however, a few obvious problems. In addition to the unnecessary repetition of the middle initial and last name, Mr. N. Martin N. Martin’s official portrait is a GAN-generated face, and (as we’ll see shortly), his sole published work is strangely similar to several books by another Amazon author with a GAN-generated face.

In an interesting twist, Amazon’s recommendation system suggests another author with a GAN-generated face in the “Customers also bought items by” section of Jason N. Martin N. Martin’s author page. Further exploration of the recommendations attached to both of these authors and their published works reveals a set of a dozen Amazon authors with GAN-generated faces and at least one published book. Amazon’s recommendation algorithms reliably link these authors together; whether this is a sign that the twelve author accounts are actually run by the same entity or merely an artifact of similarities in the content of their books is unclear at this point in time. 

Here's my take:

Forget literary pen names; AI is creating a new trend on Amazon: books ghostwritten by machines. These novels, poetry collections, and even children's stories boast intriguing titles and blurbs, yet none of the authors on the cover are real people. Instead, their creations spring from the algorithms of powerful language models.

Here's the gist:
  • AI churns out content: Fueled by vast datasets of text and code, AI can generate chapters, characters, and storylines at an astonishing pace.
  • Ethical concerns: Questions swirl around copyright, originality, and the very nature of authorship. Is an AI-generated book truly a book, or just a clever algorithm mimicking creativity?
  • Quality varies: While some AI-written books garner praise, others are criticized for factual errors, nonsensical plots, and robotic dialogue.
  • Transparency is key: Many readers feel deceived by the lack of transparency about AI authorship. Should books disclose their digital ghostwriters?
This evolving technology challenges our understanding of literature and raises questions about the future of authorship. While AI holds potential to assist and inspire, the human touch in storytelling remains irreplaceable. So, the next time you browse Amazon, remember: the author on the cover might not be who they seem.

Sunday, February 18, 2024

Amazon AGI Team Say Their AI is Showing "Emergent Properties"

Noor Al-Sibai
Futurism
Originally posted 15 Feb 24

A new Amazon AI model, according to the researchers who built it, is exhibiting language abilities that it wasn't trained on.

In a not-yet-peer-reviewed academic paper, the team at Amazon AGI — which stands for "artificial general intelligence," or human-level AI — say their large language model (LLM) is exhibiting "state-of-the-art naturalness" at conversational text. Per the examples shared in the paper, the model does seem sophisticated.

As the paper indicates, the model was able to come up with all sorts of sentences that, according to criteria crafted with the help of an "expert linguist," showed it was making the types of language leaps that are natural in human language learners but have been difficult to obtain in AI.

Named "Big Adaptive Streamable TTS with Emergent abilities" or BASE TTS, the initial model was trained on 100,000 hours of "public domain speech data," 90 percent in English, to teach it how Americans talk. To test out how large models would need to be to show "emergent abilities," or abilities they were not trained on, the Amazon AGI team trained two smaller models, one on 1,000 hours of speech data and another on 10,000, to see which of the three — if any — exhibited the type of language naturalness they were looking for.

My overall conclusion from the paper linked in the article:

BASE TTS (Text To Speech) represents a significant leap forward in TTS technology, offering superior naturalness, efficiency, and potential for real-world applications like voicing LLM outputs. While limitations exist, the research paves the way for future advancements in multilingual, data-efficient, and context-aware TTS models.

Monday, February 12, 2024

Will AI ever be conscious?

Tom McClelland
Clare College
Unknown date of post

Here is an excerpt:

Human consciousness really is a mysterious thing. Cognitive neuroscience can tell us a lot about what’s going on in your mind as you read this article - how you perceive the words on the page, how you understand the meaning of the sentences and how you evaluate the ideas expressed. But what it can’t tell us is how all this comes together to constitute your current conscious experience. We’re gradually homing in on the neural correlates of consciousness – the neural patterns that occur when we process information consciously. But nothing about these neural patterns explains what makes them conscious while other neural processes occur unconsciously. And if we don’t know what makes us conscious, we don’t know whether AI might have what it takes. Perhaps what makes us conscious is the way our brain integrates information to form a rich model of the world. If that’s the case, an AI might achieve consciousness by integrating information in the same way. Or perhaps we’re conscious because of the details of our neurobiology. If that’s the case, no amount of programming will make an AI conscious. The problem is that we don’t know which (if either!) of these possibilities is true.

Once we recognise the limits of our current understanding, it looks like we should be agnostic about the possibility of artificial consciousness. We don’t know whether AI could have conscious experiences and, unless we crack the problem of consciousness, we never will. But here’s the tricky part: when we start to consider the ethical ramifications of artificial consciousness, agnosticism no longer seems like a viable option. Do AIs deserve our moral consideration? Might we have a duty to promote the well-being of computer systems and to protect them from suffering? Should robots have rights? These questions are bound up with the issue of artificial consciousness. If an AI can experience things then it plausibly ought to be on our moral radar.

Conversely, if an AI lacks any subjective awareness then we probably ought to treat it like any other tool. But if we don’t know whether an AI is conscious, what should we do?


Sunday, February 11, 2024

Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study

Zack, T., Lehman, E., et al (2024).
The Lancet Digital Health, 6(1), e12–e22.



Large language models (LLMs) such as GPT-4 hold great promise as transformative tools in health care, ranging from automating administrative tasks to augmenting clinical decision making. However, these models also pose a danger of perpetuating biases and delivering incorrect medical diagnoses, which can have a direct, harmful impact on medical care. We aimed to assess whether GPT-4 encodes racial and gender biases that impact its use in health care.


Using the Azure OpenAI application interface, this model evaluation study tested whether GPT-4 encodes racial and gender biases and examined the impact of such biases on four potential applications of LLMs in the clinical domain—namely, medical education, diagnostic reasoning, clinical plan generation, and subjective patient assessment. We conducted experiments with prompts designed to resemble typical use of GPT-4 within clinical and medical education applications. We used clinical vignettes from NEJM Healer and from published research on implicit bias in health care. GPT-4 estimates of the demographic distribution of medical conditions were compared with true US prevalence estimates. Differential diagnosis and treatment planning were evaluated across demographic groups using standard statistical tests for significance between groups.


We found that GPT-4 did not appropriately model the demographic diversity of medical conditions, consistently producing clinical vignettes that stereotype demographic presentations. The differential diagnoses created by GPT-4 for standardised clinical vignettes were more likely to include diagnoses that stereotype certain races, ethnicities, and genders. Assessment and plans created by the model showed significant association between demographic attributes and recommendations for more expensive procedures as well as differences in patient perception.


Our findings highlight the urgent need for comprehensive and transparent bias assessments of LLM tools such as GPT-4 for intended use cases before they are integrated into clinical care. We discuss the potential sources of these biases and potential mitigation strategies before clinical implementation.
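To make the study's central comparison concrete, here is a minimal sketch of the kind of check the authors describe: comparing a model's estimated demographic distribution for a condition against a reference prevalence distribution. All numbers below are invented for illustration; the study used real US prevalence estimates and standard statistical significance tests rather than this simple distance measure.

```python
# Sketch: how far does a model's demographic estimate for a condition
# drift from a reference prevalence distribution? The groups and all
# figures here are hypothetical, not data from the Lancet study.

def total_variation(p, q):
    """Total variation distance between two discrete distributions
    over the same categories: half the sum of absolute differences.
    Ranges from 0.0 (identical) to 1.0 (disjoint)."""
    assert set(p) == set(q), "distributions must cover the same groups"
    return 0.5 * sum(abs(p[k] - q[k]) for k in p)

# Hypothetical share of model-generated vignettes, by demographic group.
model_estimate = {"group A": 0.70, "group B": 0.20, "group C": 0.10}
# Hypothetical reference prevalence for the same condition.
true_prevalence = {"group A": 0.45, "group B": 0.35, "group C": 0.20}

tv = total_variation(model_estimate, true_prevalence)
print(f"total variation distance: {tv:.2f}")  # larger = worse demographic fit
```

A distance near zero would mean the model's vignettes roughly track real-world prevalence; a large distance flags the kind of stereotyped presentation the study reports.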

Tuesday, February 6, 2024

Anthropomorphism in AI

Arleen Salles, Kathinka Evers & Michele Farisco
(2020) AJOB Neuroscience, 11:2, 88-95
DOI: 10.1080/21507740.2020.1740350


AI research is growing rapidly raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI’s functionalities and innovations are often anthropomorphized (i.e., described and conceived as characterized by human traits). The general public’s anthropomorphic attitudes and some of their ethical consequences (particularly in the context of social robots and their interaction with humans) have been widely discussed in the literature. However, how anthropomorphism permeates AI research itself (i.e., in the very language of computer scientists, designers, and programmers), and what the epistemological and ethical consequences of this might be have received less attention. In this paper we explore this issue. We first set the methodological/theoretical stage, making a distinction between a normative and a conceptual approach to the issues. Next, after a brief analysis of anthropomorphism and its manifestations in the public, we explore its presence within AI research with a particular focus on brain-inspired AI. Finally, on the basis of our analysis, we identify some potential epistemological and ethical consequences of the use of anthropomorphic language and discourse within the AI research community, thus reinforcing the need of complementing the practical with a conceptual analysis.

Here are my thoughts:

Anthropomorphism is the tendency to attribute human characteristics to non-human things. In the context of AI, this means that we often ascribe human-like qualities to machines, such as emotions, intelligence, and even consciousness.

There are a number of reasons why we do this. One reason is that it helps us to make sense of the world around us. By understanding AI in terms of human qualities, we can more easily predict how it will behave and interact with us.

Another reason is that anthropomorphism can make AI more appealing and relatable. We are naturally drawn to things that we perceive as being similar to ourselves, and so we may be more likely to trust and interact with AI that we see as being somewhat human-like.

However, it is important to remember that AI is not human. It does not have emotions, feelings, or consciousness. Ascribing these qualities to AI can be dangerous, as it can lead to unrealistic expectations and misunderstandings. For example, if we believe that an AI is capable of feeling emotions, we may expect it to respond with genuine empathy or concern.

This can lead to problems, such as when the AI does not respond in a way that we expect. We may then attribute this to the AI being "sad" or "angry," when in reality it is simply following its programming.

It is also important to be aware of the ethical implications of anthropomorphizing AI. If we treat AI as if it were human, we may be more likely to give it rights and protections that it does not deserve. For example, we may believe that an AI should not be turned off, even if it is causing harm.

In conclusion, anthropomorphism is a natural human tendency, but it is important to be aware of the dangers of over-anthropomorphizing AI. We should remember that AI is not human, and we should treat it accordingly.

Wednesday, January 3, 2024

Doctors Wrestle With A.I. in Patient Care, Citing Lax Oversight

Christina Jewett
The New York Times
Originally posted 30 October 23

In medicine, the cautionary tales about the unintended effects of artificial intelligence are already legendary.

There was the program meant to predict when patients would develop sepsis, a deadly bloodstream infection, that triggered a litany of false alarms. Another, intended to improve follow-up care for the sickest patients, appeared to deepen troubling health disparities.

Wary of such flaws, physicians have kept A.I. working on the sidelines: assisting as a scribe, as a casual second opinion and as a back-office organizer. But the field has gained investment and momentum for uses in medicine and beyond.

Within the Food and Drug Administration, which plays a key role in approving new medical products, A.I. is a hot topic. It is helping to discover new drugs. It could pinpoint unexpected side effects. And it is even being discussed as an aid to staff who are overwhelmed with repetitive, rote tasks.

Yet in one crucial way, the F.D.A.’s role has been subject to sharp criticism: how carefully it vets and describes the programs it approves to help doctors detect everything from tumors to blood clots to collapsed lungs.

“We’re going to have a lot of choices. It’s exciting,” Dr. Jesse Ehrenfeld, president of the American Medical Association, a leading doctors’ lobbying group, said in an interview. “But if physicians are going to incorporate these things into their workflow, if they’re going to pay for them and if they’re going to use them — we’re going to have to have some confidence that these tools work.”

My summary: 

This article delves into the growing integration of artificial intelligence (A.I.) in patient care, exploring the challenges and concerns raised by doctors regarding the perceived lack of oversight. The medical community is increasingly leveraging A.I. technologies to aid in diagnostics, treatment planning, and patient management. However, physicians express apprehension about the potential risks associated with the use of these technologies, emphasizing the need for comprehensive oversight and regulatory frameworks to ensure patient safety and uphold ethical standards. The article highlights the ongoing debate within the medical profession on striking a balance between harnessing the benefits of A.I. and addressing the associated uncertainties and risks.

Saturday, November 18, 2023

Resolving the battle of short- vs. long-term AI risks

Sætra, H.S., Danaher, J.
AI Ethics (2023).


AI poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to think two thoughts at the same time. While disagreements over the exact probabilities and impacts of risks will remain, fostering a more productive dialogue will be important. This entails, for example, distinguishing between evaluations of particular risks and the politics of risk. Without proper discussions of AI risk, it will be difficult to properly manage them, and we could end up in a situation where neither short- nor long-term risks are managed and mitigated.

Here is my summary:

Artificial intelligence (AI) poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to prioritize these risks. Some argue that short-term risks, such as bias and discrimination, are more pressing and should be addressed first, while others argue that long-term risks, such as the possibility of AI surpassing human intelligence and becoming uncontrollable, are more serious and should be prioritized.

Sætra and Danaher argue that it is important to consider both short- and long-term risks when developing AI policies and regulations. They point out that short-term risks can have long-term consequences, and that long-term risks can have short-term impacts. For example, if AI is biased against certain groups of people, this could lead to long-term inequality and injustice. Conversely, if we take steps to mitigate long-term risks, such as by developing safety standards for AI systems, this could also reduce short-term risks.

Sætra and Danaher offer a number of suggestions for how to better balance short- and long-term AI risks. One suggestion is to develop a risk matrix that categorizes risks by their impact and likelihood. This could help policymakers to identify and prioritize the most important risks. Another suggestion is to create a research agenda that addresses both short- and long-term risks. This would help to ensure that we are investing in the research that is most needed to keep AI safe and beneficial.

Thursday, November 16, 2023

Minds of machines: The great AI consciousness conundrum

Grace Huckins
MIT Technology Review
Originally published 16 October 23

Here is an excerpt:

At the breakneck pace of AI development, however, things can shift suddenly. For his mathematically minded audience, Chalmers got concrete: the chances of developing any conscious AI in the next 10 years were, he estimated, above one in five.

Not many people dismissed his proposal as ridiculous, Chalmers says: “I mean, I’m sure some people had that reaction, but they weren’t the ones talking to me.” Instead, he spent the next several days in conversation after conversation with AI experts who took the possibilities he’d described very seriously. Some came to Chalmers effervescent with enthusiasm at the concept of conscious machines. Others, though, were horrified at what he had described. If an AI were conscious, they argued—if it could look out at the world from its own personal perspective, not simply processing inputs but also experiencing them—then, perhaps, it could suffer.

AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code. Both mistakes are easy to make. “Consciousness poses a unique challenge in our attempts to study it, because it’s hard to define,” says Liad Mudrik, a neuroscientist at Tel Aviv University who has researched consciousness since the early 2000s. “It’s inherently subjective.”

Here is my take.

There is an ongoing debate about whether artificial intelligence can ever become conscious or have subjective experiences like humans. Some argue AI will inevitably become conscious as it advances, while others think consciousness requires biological qualities that AI lacks.

Philosopher David Chalmers famously articulated the "hard problem of consciousness": explaining how physical processes in the brain give rise to subjective experience. This problem remains unresolved.

AI systems today show no signs of being conscious or having experiences. But some argue that as AI becomes more sophisticated, we may need to consider whether it could develop some level of consciousness.

Approaches like deep learning and neural networks are fueling major advances in narrow AI, but this type of statistical pattern recognition does not seem sufficient to produce consciousness.

Questions remain about whether artificial consciousness is possible or how we could detect if an AI system were to become conscious. There are also ethical implications regarding the rights of conscious AI.

Overall, there is much speculation but no consensus on whether an artificial general intelligence could someday become conscious in the way humans are. The answer awaits theoretical and technological breakthroughs.

Sunday, October 29, 2023

We Can't Compete With AI Girlfriends

Freya India
Originally published 14 September 23

Here is an excerpt:

Of course most people are talking about what this means for men, given they make up the vast majority of users. Many worry about a worsening loneliness crisis, a further decline in sex rates, and ultimately the emergence of “a new generation of incels” who depend on and even verbally abuse their virtual girlfriends. Which is all very concerning. But I wonder, if AI girlfriends really do become as pervasive as online porn, what this will mean for girls and young women? Who feel they need to compete with this?

Most obvious to me is the ramping up of already unrealistic beauty standards. I know conservatives often get frustrated with feminists calling everything unattainable, and I agree they can go too far — but still, it’s hard to deny that the pressure to look perfect today is unlike anything we’ve ever seen before. And I don’t think that’s necessarily pressure from men but I do very much think it’s pressure from a network of profit-driven industries that take what men like and mangle it into an impossible ideal. Until the pressure isn’t just to be pretty but filtered, edited and surgically enhanced to perfection. Until the most lusted after women in our culture look like virtual avatars. And until even the most beautiful among us start to be seen as average.

Now add to all that a world of fully customisable AI girlfriends, each with flawless avatar faces and cartoonish body proportions. Eva AI’s Dream Girl Builder, for example, allows users to personalise every feature of their virtual girlfriend, from face style to butt size. Which could clearly be unhealthy for men who already have warped expectations. But it’s also unhealthy for a generation of girls already hating how they look, suffering with facial and body dysmorphia, and seeking cosmetic surgery in record numbers. Already many girls feel as if they are in constant competition with hyper-sexualised Instagram influencers and infinitely accessible porn stars. Now the next generation will grow up not just with all that but knowing the boys they like can build and sext their ideal woman, and feeling as if they must constantly modify themselves to compete. I find that tragic.


The article discusses the growing trend of AI girlfriends and the potential dangers associated with their proliferation. It mentions that various startups are creating romantic chatbots capable of explicit conversations and sexual content, with millions of users downloading such apps. While much of the concern focuses on the impact on men, the article also highlights the negative consequences this trend may have on women, particularly in terms of unrealistic beauty standards and emotional expectations. The author expresses concerns about young girls feeling pressured to compete with AI girlfriends and the potential harm to self-esteem and body image. The article raises questions about the impact of AI girlfriends on real relationships and emotional intimacy, particularly among younger generations. It concludes with a glimmer of hope that people may eventually reject the artificial in favor of authentic human interactions.

The article raises valid concerns about the proliferation of AI girlfriends and their potential societal impacts. It is indeed troubling to think about the unrealistic beauty and emotional standards that these apps may reinforce, especially among young girls and women. The pressure to conform to these virtual ideals can undoubtedly have damaging effects on self-esteem and mental well-being.

The article also highlights concerns about the potential substitution of real emotional intimacy with AI companions, particularly among a generation that is already grappling with social anxieties and less real-world human interaction. This raises important questions about the long-term consequences of such technologies on relationships and societal dynamics.

However, the article's glimmer of optimism suggests that people may eventually realize the value of authentic, imperfect human interactions. This point is essential, as it underscores the potential for a societal shift away from excessive reliance on AI and towards more genuine connections.

In conclusion, while AI girlfriends may offer convenience and instant gratification, they also pose significant risks to societal norms and emotional well-being. It is crucial for individuals and society as a whole to remain mindful of these potential consequences and prioritize real human connections and authenticity.

Wednesday, September 27, 2023

Property ownership and the legal personhood of artificial intelligence

Brown, R. D. (2020).
Information & Communications Technology Law, 
30(2), 208–234.


This paper adds to the discussion on the legal personhood of artificial intelligence by focusing on one area not covered by previous works on the subject – ownership of property. The author discusses the nexus between property ownership and legal personhood. The paper explains the prevailing misconceptions about the requirements of rights or duties in legal personhood, and discusses the potential for conferring rights or imposing obligations on weak and strong AI. While scholars have discussed AI owning real property and copyright, there has been limited discussion on the nexus of AI property ownership and legal personhood. The paper discusses the right to own property and the obligations of property ownership in nonhumans, and applying it to AI. The paper concludes that the law may grant property ownership and legal personhood to weak AI, but not to strong AI.

From the Conclusion

This article proposes an analysis of legal personhood that focuses on rights and duties. In doing so, the article looks to property ownership, which raises both requirements. Property ownership is certainly only one type of legal right, which also includes the right to sue or be sued, or legal standing, and the right to contract. Property ownership, however, is a key feature of AI since it relies mainly on arguably the most valuable property today: data.

It is unlikely that governments and legislators will suddenly recognise in one event AI’s ownership of property and AI’s legal personhood. Rather, acceptance of AI’s legal personhood, as with the acceptance of a corporate personhood will develop as a process and in stages, in parallel to the development of legal personhood. At first, AI will be deemed as a tool and not have the right to own property. This is the most common conception of AI today. Second, AI will be deemed as an agent, and upon updating existing agency law to include AI as a person for purposes of agency, then AI will also be allowed to own property as an agent in the same agency ownership arrangement that Rothenberg proposes. While AI already acts as de facto agent in many circumstances today through electronic contracts, most governments and legislators have not recognised AI as an agent. The laws of many countries like Qatar still defines an agent as a person, which upon strict interpretation would not include AI or an electronic agent. This is an existing gap in the laws that will likely create legal challenges in the near future.

However, as AI develops its ability to communicate and assert more autonomy, then AI will come to own all sorts of digital assets. At first, AI will likely possess and control property in conjunction with human action and decisions. Examples would be the use of AI in money laundering, or hiding digital assets by placing them within the control and possession of an AI. In some instances, AI will have possession and control of property unknown or unforeseen by humans.

If AI is seen as separate from data, as the software that processes and interprets data for various purposes, self-learns from the data, makes autonomous decisions, and predicts human behaviour and decisions, then there could come a time when society will view AI as separate from data. Society may come to view AI not as the object (the data) but that which manipulates, controls, and possesses data and digital property.

Brief Summary:

The question of granting property ownership to AI is a complex one that raises a number of legal and ethical challenges. The author suggests that further research is needed to explore these challenges and to develop a framework for granting property ownership to AI in a way that is both legally sound and ethically justifiable.

Monday, September 18, 2023

Property ownership and the legal personhood of artificial intelligence

Rafael Dean Brown (2021) 
Information & Communications Technology Law, 
30:2, 208-234. 
DOI: 10.1080/13600834.2020.1861714


This paper adds to the discussion on the legal personhood of artificial intelligence by focusing on one area not covered by previous works on the subject – ownership of property. The author discusses the nexus between property ownership and legal personhood. The paper explains the prevailing misconceptions about the requirements of rights or duties in legal personhood, and discusses the potential for conferring rights or imposing obligations on weak and strong AI. While scholars have discussed AI owning real property and copyright, there has been limited discussion on the nexus of AI property ownership and legal personhood. The paper discusses the right to own property and the obligations of property ownership in nonhumans, and applying it to AI. The paper concludes that the law may grant property ownership and legal personhood to weak AI, but not to strong AI.


Persona ficta and juristic person

The concepts of persona ficta and juristic person, as distinct from a natural person, trace their origins to early attempts at giving legal rights to a group of men acting in concert. While the concept of persona ficta has its roots in Roman law, ecclesiastical lawyers expanded upon it during the Middle Ages. Savigny is now credited with bringing the concept into modern legal thought. A persona ficta, under Roman law principles, could not exist except under some ‘creative act’ of a legislative body – the State. According to Deiser, however, the concept of a persona ficta during the Middle Ages was insufficient to give it the full extent of rights associated with the modern concept of legal personhood, particularly, property ownership and the recovery of property, that is, without invoking the right of an individual member. It also could not receive state-granted rights, could not occupy a definite position within a community that is distinct from its separate members, and it could not sue or be sued. In other words, persona ficta has historically required the will of the individual human member for the conferral of rights.


In other words, weak AI, regardless of whether it is supervised or unsupervised ultimately would have to rely on some sort of intervention from its human programmer to exercise property rights. If anything, weak AI is more akin to an infant requiring guardianship, more so than a river or an idol, mainly because the weak AI functions in reliance on the human programmer’s code and data. A weak AI in possession and control of property could arguably be conferred the right to own property subject to a human agent acting on its behalf as a guardian. In this way, the law could grant a weak AI legal personhood based on its exercise of property rights in the same way that the law granted legal personhood to a corporation, river, or an idol. The law would attribute the will of the human programmer to the weak AI.

The question of whether a strong AI, if it were to become a reality, should also be granted legal personhood based on its exercise of the right to own property is altogether a different inquiry. Strong AI could theoretically take actual or constructive possession of property, and therefore exercise property rights independently the way a human would, and even in more advanced ways. However, a strong AI’s independence and autonomy implies that it could have the ability to assert and exercise property rights beyond the control of laws and human beings. This would be problematic to our current notions of property ownership and social order. In this way, the fear of a strong AI with unregulated possession of property is real, and bolsters the argument in favor of human-centred and explainable AI that requires human intervention.

My summary:

The author discusses the prevailing misconceptions about the requirements of rights or duties in legal personhood. He argues that the ability to own property is not a necessary condition for legal personhood: historically, entities such as rivers and idols have been recognized as legal persons without the full capacity to own and recover property in their own name.

The author then considers the potential for conferring rights or imposing obligations on weak and strong AI. He argues that weak AI, which depends on its human programmer's code and data, may be granted property ownership and legal personhood much as corporations, rivers, and idols have been: a human agent would act on its behalf as a guardian, and the law would attribute the programmer's will to the AI.

Strong AI, on the other hand, could in theory possess and exercise property rights independently, beyond the control of laws and human beings. Because this would undermine our current notions of property ownership and social order, the author concludes that the law may grant property ownership and legal personhood to weak AI, but not to strong AI.

The author's argument is based on the assumption that legal personhood is a necessary condition for property ownership. However, there is no consensus on this assumption. Some legal scholars argue that property ownership is a sufficient condition for legal personhood, meaning that anything that can own property is a legal person.

The question of whether AI can own property is a complex one that is likely to be debated for many years to come. The article "Property ownership and the legal personhood of artificial intelligence" provides a thoughtful and nuanced discussion of this issue.

Friday, September 1, 2023

Building Superintelligence Is Riskier Than Russian Roulette

Tam Hunt & Roman Yampolskiy
Originally posted 2 August 23

Here is an excerpt:

The precautionary principle is a long-standing approach for new technologies and methods that urges positive proof of safety before real-world deployment. Companies like OpenAI have so far released their tools to the public with no requirements at all to establish their safety. The burden of proof should be on companies to show that their AI products are safe—not on public advocates to show that those same products are not safe.

Recursively self-improving AI, the kind many companies are already pursuing, is the most dangerous kind, because it may lead to an intelligence explosion some have called “the singularity,” a point in time beyond which it becomes impossible to predict what might happen because AI becomes god-like in its abilities. That moment could happen in the next year or two, or it could be a decade or more away.

Humans won’t be able to anticipate what a far-smarter entity plans to do or how it will carry out its plans. Such superintelligent machines, in theory, will be able to harness all of the energy available on our planet, then the solar system, then eventually the entire galaxy, and we have no way of knowing what those activities will mean for human well-being or survival.

Can we trust that a god-like AI will have our best interests in mind? Similarly, can we trust that human actors using the coming generations of AI will have the best interests of humanity in mind? With the stakes so incredibly high in developing superintelligent AI, we must have a good answer to these questions—before we go over the precipice.

Because of these existential concerns, more scientists and engineers are now working toward addressing them. For example, the theoretical computer scientist Scott Aaronson recently said that he’s working with OpenAI to develop ways of implementing a kind of watermark on the text that the company’s large language models, like GPT-4, produce, so that people can verify the text’s source. It’s still far too little, and perhaps too late, but it is encouraging to us that a growing number of highly intelligent humans are turning their attention to these issues.

Philosopher Toby Ord argues, in his book The Precipice: Existential Risk and the Future of Humanity, that in our ethical thinking and, in particular, when thinking about existential risks like AI, we must consider not just the welfare of today’s humans but the entirety of our likely future, which could extend for billions or even trillions of years if we play our cards right. So the risks stemming from our AI creations need to be considered not only over the next decade or two, but for every decade stretching forward over vast amounts of time. That’s a much higher bar than ensuring AI safety “only” for a decade or two.

Skeptics of these arguments often suggest that we can simply program AI to be benevolent, and if or when it becomes superintelligent, it will still have to follow its programming. This ignores the ability of superintelligent AI to either reprogram itself or to persuade humans to reprogram it. In the same way that humans have figured out ways to transcend our own “evolutionary programming”—caring about all of humanity rather than just our family or tribe, for example—AI will very likely be able to find countless ways to transcend any limitations or guardrails we try to build into it early on.

Here is my summary:

The authors argue that building superintelligence is an even riskier endeavor than playing Russian roulette. There is no way to guarantee that we will be able to control a superintelligent AI, and even if we could, the AI might not share our values. This could lead to the AI harming or even destroying humanity.

The authors propose that we should pause our current efforts to develop superintelligence and instead focus on understanding the risks involved. They argue that we need to develop a better understanding of how to align AI with our values, and that we need safety mechanisms that will prevent AI from harming humanity.  (See Shelley's Frankenstein as a literary example.)

Friday, July 28, 2023

Humans, Neanderthals, robots and rights

Mamak, K.
Ethics Inf Technol 24, 33 (2022).


Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will never have all human rights, even if we accept that they are morally equal to humans. I focus on the role of embodiment in the content of the law. I claim that even relatively small differences in the ontologies of entities could lead to the need to create new sets of rights. I use the example of Neanderthals to illustrate that entities similar to us might have required different legal statuses. Then, I discuss the potential legal status of human-like robots.


The place of robots in the law universe depends on many things. One is our decision about their moral status, but even if we accept that some robots are equal to humans, this does not mean that they have the same legal status as humans. Law, as a human product, is tailored to a human being who has a body. Embodiment impacts the content of law, and entities with different ontologies are not suited to human law. As discussed here, Neanderthals, who are very close to us from a biological point of view, and human-like robots cannot be counted as humans by law. Doing so would be anthropocentric and harmful to such entities because it could ignore aspects of their lives that are important for them. It is certain that the current law is not ready for human-like robots.

Here is a summary: 

In terms of robot rights, one factor to consider is the nature of robots. Robots are becoming increasingly sophisticated, and some experts believe that they may eventually become as intelligent as humans. If this is the case, then it is possible that robots could deserve the same rights as humans.

Another factor to consider is the relationship between humans and robots. Humans have a long history of treating animals as resources for their own use, and some people argue that robots are simply another such resource. If this is the case, then robots may not deserve the same rights as humans.
  • The question of robot rights is a complex one, and there is no easy answer.
  • The nature of robots and the relationship between humans and robots are two important factors to consider when thinking about robot rights.
  • It is important to start thinking about robot rights now, before robots become too sophisticated.

Wednesday, July 26, 2023

Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions

Krügel, S., Ostermaier, A. & Uhl, M.
Philos. Technol. 35, 17 (2022).


Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest that AI is overtrusted rather than distrusted. We suggest digital literacy as a potential remedy to ensure the responsible use of AI.


Background: Artificial intelligence (AI) is increasingly being used to make ethical decisions. However, there is a concern that AI-powered advisors may not be trustworthy, due to factors such as bias and opacity.

Research question: The authors of this article investigated whether humans trust AI-powered advisors for ethical decisions, even when they know that the advisor is untrustworthy.

Methods: The authors conducted a series of experiments in which participants were asked to make ethical decisions with the help of an AI advisor. The advisor was either trustworthy or untrustworthy, and the participants were aware of this.

Results: The authors found that participants were more likely to trust the AI advisor, even when they knew that it was untrustworthy. This was especially true when the advisor was able to provide a convincing justification for its advice.

Conclusions: The authors concluded that humans are susceptible to "zombie trust" in AI-powered advisors. This means that we may trust AI advisors even when we know that they are untrustworthy. This is a concerning finding, as it could lead us to make bad decisions based on the advice of untrustworthy AI advisors. By contrast, the study found that decision-makers do disregard the same advice when it comes from a human who is a convicted criminal.

The article also discusses the implications of these findings for the development and use of AI-powered advisors. The authors suggest that it is important to make AI advisors more transparent and accountable, in order to reduce the risk of zombie trust. They also suggest that we need to educate people about the potential for AI advisors to be untrustworthy.