Sundar Pichai’s Warning: Don’t Rely Blindly on AI Tools
Google’s parent-company CEO, Sundar Pichai, has warned people not to “blindly trust” everything AI systems generate.
In an exclusive interview with the BBC, he explained that AI models can still “make mistakes,” urging users to rely on them alongside other sources of information.
Importance of a Diverse and Reliable Information Ecosystem
Pichai emphasized the need for a diverse information ecosystem rather than depending solely on AI.
“This is why people still use Google Search,” he said, noting that other products are designed to deliver more reliable, grounded information.
However, some experts argue that companies like Google should focus on improving system accuracy instead of asking users to double-check AI output.
While AI can be useful “for creative writing,” Pichai noted that people must “understand what these tools are good at and avoid trusting them unquestioningly.”
He also highlighted that, despite Google’s efforts to provide accurate results, today’s most advanced AI models still produce errors.
Google includes warnings on its AI products to inform users that mistakes may happen—but that has not prevented criticism.
Google’s AI Overviews feature, which summarizes search results, faced backlash and ridicule due to inaccurate, bizarre responses.
The broader issue—generative AI tools inventing or repeating false information—remains a major concern among specialists.
“We know these systems fabricate answers to satisfy the user—and that’s a problem,” said Gina Neff, a professor of responsible AI at Queen Mary University of London.
She stressed that casual requests, such as asking for movie recommendations, are low-stakes, but inaccurate answers on topics such as health or news could be dangerous.
Neff also argued that Google should take more accountability for the reliability of its systems instead of shifting responsibility to users:
“The company is essentially grading its own test while the school burns down,” she said.

A New Phase for Google’s AI
Tech observers have been watching the launch of Google’s newest consumer AI model, Gemini 3.0, as the company begins to regain market share lost to ChatGPT.
Google introduced the model on Tuesday, calling it “a new era of intelligence” across its services—including Search.
According to Google, Gemini 3.0 delivers top-tier performance in interpreting and responding to various inputs like photos, audio, and video, along with improved reasoning abilities.
In May, Google rolled out “AI Mode” in Search, integrating Gemini to offer an expert-like conversational experience.
Pichai described this as a “new phase” in the evolution of AI platforms.
The move helps Google remain competitive with alternatives such as ChatGPT, which has challenged the company’s dominance in online search.
Earlier BBC research showed that many AI chatbots—including ChatGPT, Copilot, Gemini, and Perplexity—produced “significant inaccuracies” when summarizing BBC news articles.
Follow-up research suggests that AI systems still misrepresent news content nearly half the time.
Final Thoughts: Balancing Progress and Risk
Pichai acknowledged the tension between rapid AI development and the need for safeguards to limit harm. For Alphabet, he said the goal is to be “bold and responsible at once.” He added that Google is increasing its investment in AI safety proportionally with its broader AI spending.
“For example, we’ve open-sourced technology to help detect when an image has been AI-generated,” he said.
When asked about Elon Musk’s past claims warning that DeepMind could create an AI “dictatorship,” Pichai said no single company should control such powerful technology. But he pointed out that today the AI industry includes many major players. “If there were only one company building AI and everyone else had to rely on it, I’d be worried—but we’re nowhere near that situation,” he said.