Microsoft AI chief says only biological beings can have consciousness

Microsoft AI CEO Mustafa Suleyman advocates for refocusing AI research away from pursuing conscious AI, arguing only biological beings possess it. He believes mimicking human-like responses shouldn’t be mistaken for sentience, emphasizing AI’s lack of subjective experience. Suleyman distinguishes between intelligence and consciousness, citing ethical concerns over potential misuse of AI perceived as sentient. He highlights Microsoft’s commitment to responsible AI development, avoiding areas like erotica, while emphasizing transparency and AI working in service of humans.

Mustafa Suleyman, CEO of Microsoft AI, speaks at an event commemorating the 50th anniversary of the company at Microsoft headquarters in Redmond, Washington, on April 4, 2025.

David Ryder | Bloomberg | Getty Images

Microsoft AI chief Mustafa Suleyman is advocating for a recalibration of AI research, asserting that only biological beings are capable of consciousness. He urges developers and researchers to redirect efforts away from projects pursuing the illusion of conscious AI.

“I don’t think that is work that people should be doing,” Suleyman stated in an interview at the AfroTech Conference in Houston, where he was a keynote speaker. “If you ask the wrong question, you end up with the wrong answer. I think it’s totally the wrong question.”

Suleyman’s stance places him at the forefront of a growing debate within the AI community. As AI models become increasingly sophisticated, mimicking human-like responses and even displaying apparent empathy, the ethical implications of pursuing “conscious” AI become ever more pressing. Suleyman’s concerns center on the potential for misinterpreting advanced AI capabilities as genuine sentience, which could lead to flawed decision-making and misguided applications.

Suleyman has been a vocal proponent of responsible AI development, having co-authored “The Coming Wave” in 2023, which explores the potential risks of AI and other emerging technologies. He also penned an essay titled, “We must build AI for people; not to be a person,” further emphasizing his position.

This perspective contrasts with the rapid growth of the AI companion market, driven by companies like Meta and Elon Musk’s xAI, and the broader push towards artificial general intelligence (AGI) by OpenAI and others.
While these companies focus on creating AI that can perform intellectual tasks on par with humans, Suleyman’s commentary raises crucial questions about the ethical lines that the industry must adhere to as it continues to innovate.


OpenAI CEO Sam Altman has previously stated that AGI is “not a super useful term,” arguing that the focus should be on the rapid advancement of AI models and their increasing utility across various applications.

However, Suleyman draws a clear distinction between intelligence and consciousness, arguing that the ability of AI to mimic human emotions should not be mistaken for genuine sentience. “Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn’t feel sad when it experiences ‘pain,'” Suleyman explained. “It’s a very, very important distinction. It’s really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it’s actually experiencing. Technically you know that because we can see what the model is doing.”

This perspective aligns with the theory of biological naturalism, proposed by philosopher John Searle, which posits that consciousness is intrinsically linked to the biological processes of a living brain.

“The reason we give people rights today is because we don’t want to harm them, because they suffer. They have a pain network, and they have preferences which involve avoiding pain,” Suleyman said. “These models don’t have that. It’s just a simulation.” This assertion implies a significant ethical divergence: if AI lacks the capacity for subjective experience, according rights to AI agents is not only irrational but also a possible distraction from more pressing ethical considerations that need to be addressed.

While acknowledging that the science of detecting consciousness is still emerging, Suleyman remains firm in his opposition to research that assumes AI can attain consciousness. “They’re not conscious,” he said. “So it would be absurd to pursue research that investigates that question, because they’re not and they can’t be.”

Places that we won’t go

Suleyman’s public advocacy extends beyond philosophical arguments. He recently stated that Microsoft will refrain from developing chatbots for erotica, differentiating the company from competitors like OpenAI and xAI, which have embraced such applications.

“You can basically buy those services from other companies, so we’re making decisions about what places that we won’t go,” Suleyman reiterated at AfroTech.

Suleyman joined Microsoft in 2024 after the tech giant struck a deal with Inflection AI involving licensing and talent acquisition. Previously, he co-founded DeepMind, which Google acquired over a decade ago.

During his Q&A session at AfroTech, Suleyman cited Microsoft’s history, stability, and technological reach as key factors in his decision to join the company. He also highlighted the active recruitment efforts of CEO Satya Nadella.

“The other thing to say is that Microsoft needed to be self-sufficient in AI,” he said onstage. “Satya, our CEO, set about on this mission about 18 months ago, to make sure that in house we have the capacity to train our own models end to end with all of our own data, pre training, post training, reasoning, deployment in products. And that was part of bringing on my team.” This strategic decision signals Microsoft’s commitment to becoming a vertically integrated AI powerhouse capable of controlling its AI development pipeline end-to-end and decreasing its reliance on external partners.

Since 2019, Microsoft has been a major investor in OpenAI, leading to the creation of significant AI-driven business ventures. However, recent shifts suggest a recalibration of this relationship. OpenAI’s partnerships with Microsoft rivals like Google and Oracle signal an attempt by OpenAI to diversify its resource base and reduce its dependence on a single strategic partner. Meanwhile, Microsoft has been focused on developing its own AI services separate from OpenAI, positioning itself as self-reliant and ready to compete independently. This strategic divergence potentially signifies a shifting power dynamic in the AI landscape.

Suleyman’s concerns about AI consciousness are gaining traction, as evidenced by California’s recent legislation requiring chatbots to disclose their AI nature and encourage minors to take breaks.

Microsoft recently unveiled new features in its Copilot AI service, including an AI companion called Mico and group chat functionality. Suleyman emphasized that Microsoft is committed to building AI services that are transparent about their non-human nature. “Quite simply, we’re creating AIs that are always working in service of the human,” he said.

There’s plenty of room for personality, he added.

“The knowledge is there, and the models are very, very responsive,” Suleyman said. “It’s on everybody to try and sculpt AI personalities with values that they want to see, they want to use and interact with.”

Suleyman highlighted a new Copilot conversation style called “real talk,” designed to challenge users’ perspectives instead of simply agreeing with them.

He described real talk as sassy and recounted an experience where it criticized him for warning of the dangers of AI while simultaneously accelerating its development at Microsoft. “That was just a magical use case because in some ways I was like, I actually do feel kind of seen by this,” Suleyman said, noting that AI itself is full of contradictions.

“It is both underwhelming in some ways and, at the same time, it’s totally magical,” he said. “And if you’re not afraid by it, you don’t really understand it. You should be afraid by it. The fear is healthy. Skepticism is necessary. We don’t need unbridled accelerationism.”


Original article, Author: Tobias. If you wish to reprint this article, please indicate the source:https://aicnbc.com/12125.html
