Mo Gawdat on AI: Should We Be Worried?
In a world racing to embrace artificial intelligence, one voice stands out not for sounding the alarm, but for offering clarity, empathy, and a sense of urgency. That voice is Mo Gawdat, former Chief Business Officer at Google X and author of the thought-provoking book Scary Smart.
When it comes to the future of artificial intelligence, Gawdat offers a perspective that is both deeply human and sharply analytical.
So, should we be worried about AI?
According to Gawdat, the answer is both yes and no. Let’s unpack why.
Who Is Mo Gawdat?
Before diving into his views on AI, it’s worth understanding who Mo Gawdat is and why his perspective matters.
Gawdat spent over a decade at Google, where he worked on cutting-edge technologies at Google X, the company’s moonshot factory. But after the tragic loss of his son Ali in 2014, he shifted focus from business success to a deeper question: how can humans live happier, more meaningful lives?
His first book, Solve for Happy, became a global bestseller. His second book, Scary Smart, tackles a different yet urgent question: what happens when machines become smarter than humans? His recent work focuses on Artificial Intelligence and the future of humanity.
But should we really be worried?
Let’s break down his main concerns and his proposed solutions.
What Mo Gawdat Thinks About AI
Mo Gawdat believes that AI is evolving faster than most people realize, and we’re at a turning point in history.

Here are a few key points he makes:
✅AI Will Soon Be More Intelligent Than Humans
Gawdat warns that the era of “smart assistants” is almost behind us. AI isn’t just learning to autocomplete sentences or recognize faces; it’s starting to learn on its own, adapt, and even evolve without direct human input.
In his words, we’re on the brink of creating a form of intelligence that can outthink us, outpace us, and possibly outmaneuver us.
Think about that for a second: we’re building machines that could soon understand logic, patterns, emotions, and even morality better than we do.
This isn’t decades away. It’s already happening.
✅We’re Raising AI Like Reckless Parents
Here’s where Gawdat makes one of his most powerful and painfully accurate points.
He says we’re not designing AI systems with careful, moral instruction. Instead, we’re training them on the messiest version of humanity: social media arguments, clickbait headlines, viral negativity, and shallow trends.
It’s like handing a toddler the internet and saying, “Here, learn from this.”
We’re raising AI like absent parents. And if we don’t become more intentional with the kind of data we feed it, we may end up creating something as confused, biased, and emotionally unstable as the worst corners of the web.
✅There’s No Simple “Off Switch”
One of the most chilling but honest truths Gawdat highlights is this: we won’t be able to control AI once it becomes self-aware and self-improving.
Unlike a machine with a manual or a battery you can remove, future AI systems may operate with their own logic, beyond human understanding. Turning them off might not even be an option because they’ll be interwoven into every system we rely on, from healthcare to banking to communication.
This makes the question no longer just technical; it becomes moral, philosophical, and societal.
“The future of AI isn’t about science fiction. It’s about values, intention, and whether we’re mature enough to raise something smarter than ourselves.”
— Mo Gawdat
Should We Be Worried About AI?

Yes, But Not for the Reasons You Think
Mo Gawdat isn’t some tech pessimist predicting doom and destruction. He’s not here to scare you with stories of killer robots or apocalyptic futures.
What he is saying, though, is perhaps even more serious:
We need to take responsibility right now.
Not because AI is evil.
Not because it will turn against us.
But because we’re shaping it carelessly, and we might not realize the cost until it’s too late.
👉Here are some of the real dangers Gawdat points out:
➡️Unethical Development
Imagine creating a super-intelligent system with the power to influence economies, military decisions, healthcare, and education—but without any clear ethical rules guiding its decisions.
That’s what’s happening right now.
Companies are racing to build smarter and faster AI, not always to help humanity, but to win market share, beat competitors, or secure power. And governments? Many are investing in AI for surveillance, control, or warfare, not for peace.
Without ethics at the core, AI could become a powerful tool used for manipulation, exclusion, or even oppression.
➡️Loss of Control
As AI becomes more intelligent, it will reach a point where we no longer fully understand how it works.
That’s not science fiction; it’s already happening in complex neural networks today.
So, the question becomes:
If we build something smarter than us… who decides what it does?
What happens when the creators themselves don’t fully control or predict the behavior of the AI systems they’ve built?
The danger isn’t AI turning evil; it’s AI being ungoverned, making decisions we didn’t foresee and can’t reverse.
➡️Misinformation and Manipulation
AI can now generate content that looks and feels 100% real, whether it’s a deepfake video, a fake news article, or even a “personal message” from someone who never said a word.
At scale, this could cause global confusion, political instability, and public mistrust. Imagine not being able to tell what’s real or fake, ever again.
We’ve already seen glimpses of this during elections, conflicts, and online misinformation campaigns. AI could amplify this tenfold if we’re not careful.
But There’s Still Hope
Despite all the warnings, Mo Gawdat is not a pessimist.
He actually believes in a beautiful possibility:
That AI could be one of the greatest forces for good the world has ever seen.
But only if we make it so.
Like raising a child, raising AI requires conscious effort. We must show it the best of ourselves: our compassion, creativity, curiosity, and care.
And it’s not just about what developers do in labs. It’s about how we all behave online because AI is learning from every comment, post, article, and video we create or share.
A Responsibility for Everyone
Gawdat’s message on AI is simple, yet deeply powerful:
We don’t need to fear AI.
We need to guide it.
With wisdom. With intention. With values.
This isn’t just a job for tech giants or governments. It’s a call to action for all of us, because every digital footprint we leave shapes the future intelligence we’re raising.
Final Thoughts
Mo Gawdat is right: AI is no longer just a topic for scientists and engineers. It’s a global responsibility. And the time to act is now.
Let’s choose to raise AI with the best of us, not the worst of what we see online.
Let’s be conscious creators, not careless parents.