Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, June 29, 2024

OpenAI insiders are demanding a “right to warn” the public

Sigal Samuel
Vox.com
Originally posted 5 June 2024

Here is an excerpt:

To be clear, the signatories are not saying they should be free to divulge intellectual property or trade secrets, but as long as they protect those, they want to be able to raise concerns about risks. To ensure whistleblowers are protected, they want the companies to set up an anonymous process by which employees can report their concerns “to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise.” 

An OpenAI spokesperson told Vox that current and former employees already have forums to raise their thoughts through leadership office hours, Q&A sessions with the board, and an anonymous integrity hotline.

“Ordinary whistleblower protections [that exist under the law] are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the signatories write in the proposal. They have retained a pro bono lawyer, Lawrence Lessig, who previously advised Facebook whistleblower Frances Haugen and whom the New Yorker once described as “the most important thinker on intellectual property in the Internet era.”


Here are some thoughts:

AI development is booming, but with great power comes great responsibility, typed the Spider-Man fan. AI researchers at OpenAI are calling for a "right to warn" the public about potential risks, which echoes the "duty to warn" that clinical psychologists have regarding violent patients. This raises important ethical questions. On one hand, transparency and open communication are crucial for responsible AI development. On the other hand, companies have legitimate interests in protecting their intellectual property. The key seems to lie in striking a balance: researchers should have safe channels to voice concerns without fearing retaliation, and clear guidelines can help ensure responsible disclosure without compromising confidential information.

Ultimately, fostering a culture of open communication is essential if AI is to benefit society without creating unforeseen risks. In this matter, AI developers need ethical guidelines similar to those that already bind psychologists.