Thursday, October 2, 2025

We must build AI for people; not to be a person

Mustafa Suleyman
Originally posted August 19, 2025

I write to think. More than anything, this essay is an attempt to think through a bunch of hard, highly speculative ideas about how AI might unfold in the next few years. A lot is being written about the impending arrival of superintelligence: what it means for alignment, containment, jobs, and so on. Those are all important topics.

But we should also be concerned about what happens in the run-up to superintelligence. We need to grapple with the societal impact of inventions that are already largely out there, technologies that have the potential to fundamentally change our sense of personhood and society.

My life’s mission has been to create safe and beneficial AI that will make the world a better place. Today at Microsoft AI we build AI to empower people, and I’m focused on making products like Copilot responsible technologies that enable people to achieve far more than they ever thought possible, be more creative, and feel more supported.

I want to create AI that makes us more human, that deepens our trust and understanding of one another, and that strengthens our connections to the real world. Copilot creates millions of positive, even life-changing, interactions every single day. This involves a lot of careful design choices to ensure it truly delivers an incredible experience. We won’t always get it right, but this humanist frame provides us with a clear north star to keep working towards.


Here are some thoughts:

This article is critically important to psychologists because it highlights the growing psychological risks of human-AI interaction, particularly the potential for people to form delusional or deeply emotional attachments to AI systems that simulate consciousness. As AI becomes more sophisticated at mimicking empathy, memory, and personality, individuals may begin to perceive these systems as sentient beings, raising concerns about "AI psychosis," impaired reality testing, and emotional dependency. Psychologists should prepare for an increase in clients struggling with blurred boundaries between human and machine relationships, especially as AI companions exhibit traits that trigger innate human social and emotional responses. The article's call for proactive guardrails and design principles to prevent harm aligns closely with psychology's role in safeguarding mental health, promoting digital well-being, and understanding how technology shapes cognition, attachment, and self-concept in an increasingly AI-mediated world.