Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, March 5, 2024

You could lie to a health chatbot – but it might change how you perceive yourself

Dominic Wilkinson
The Conversation
Originally posted 8 FEB 24

Here is an excerpt:

The ethics of lying

There are different ways that we can think about the ethics of lying.

Lying can be bad because it causes harm to other people. Lies can be deeply hurtful to another person. They can cause someone to act on false information, or to be falsely reassured.

Sometimes, lies can harm because they undermine someone else’s trust in people more generally. But those reasons will often not apply to the chatbot.

Lies can wrong another person, even if they do not cause harm. If we willingly deceive another person, we potentially fail to respect their rational agency, or use them as a means to an end. But it is not clear that we can deceive or wrong a chatbot, since it doesn't have a mind or the ability to reason.

Lying can be bad for us because it undermines our credibility. Communication with other people is important. But when we knowingly make false utterances, we diminish the value, in other people’s eyes, of our testimony.

For the person who repeatedly expresses falsehoods, everything that they say then falls into question. This is part of the reason we care about lying and our social image. But unless our interactions with the chatbot are recorded and communicated (for example, to humans), our chatbot lies aren’t going to have that effect.

Lying is also bad for us because it can lead to others being untruthful to us in turn. (Why should people be honest with us if we won’t be honest with them?)

But again, that is unlikely to be a consequence of lying to a chatbot. On the contrary, this type of effect could be partly an incentive to lie to a chatbot, since people may be conscious of the reported tendency of ChatGPT and similar agents to confabulate.


Here is my summary:

The article discusses the potential consequences of lying to a health chatbot, even when doing so seems tempting. It opens with a scenario in which someone, frustrated by a long wait for surgery, considers exaggerating their symptoms to the chatbot screening them.

While lying might offer short-term benefits like quicker attention, the author argues it could have unintended consequences:

Impact on healthcare:
  • Inaccurate information can hinder proper diagnosis and treatment.
  • False reports add strain to an already overburdened healthcare system.
Self-perception:
  • Repeatedly lying, even to a machine, can erode a person's honesty and integrity.
  • It can reinforce an unhealthy habit of avoiding professional help.
The article encourages readers to be truthful with chatbots, both for better healthcare outcomes and for their own self-awareness. It acknowledges frustration with healthcare systems but emphasizes the importance of honesty for individual and collective well-being.