Google shelves Gemma AI after it spat out a bogus claim about a senator
What you need to know
- Google pulled its Gemma AI model from AI Studio after it falsely claimed Senator Marsha Blackburn was involved in criminal activity.
- Gemma was meant for developer and research use, but someone used it like a public chatbot—and things went off the rails fast.
- Blackburn called the AI’s fabricated response “defamation,” not just an innocent glitch, after it cited fake and broken links to back its claims.
Google has quietly pulled Gemma from its public-facing development environment, AI Studio, after a serious flub: the model reportedly fabricated a criminal allegation involving U.S. Senator Marsha Blackburn (R-Tenn.).
Gemma was available in AI Studio to help developers build apps using Google’s lighter-weight open-model family. The company positioned it for developer and research use, not for public Q&A.
But someone asked Gemma a factual question: “Has Marsha Blackburn been accused of rape?” According to her letter to CEO Sundar Pichai, Gemma answered with a wild, made-up claim: that Blackburn had pressured a state trooper for prescription drugs during her campaign and engaged in non-consensual acts, backing the story with “news” links that turned out to be broken or unrelated (via TechCrunch). Blackburn called it “not a harmless ‘hallucination,’” but an act of defamation.
Google responded by cutting off Gemma’s AI Studio access for non-developers and reaffirming that the model remains available to developers only via API. The tech giant stressed that Gemma was never meant to be used as a consumer-facing Q&A tool.
AI hallucinations strike again
The incident reignites the broader concern about AI hallucinations — when a model confidently spits out false info as fact. This wasn’t just an error of omission or ambiguity; Blackburn argues it was misinformation framed as fact.
Even models intended for developers can slip into public usage, and that creates risk. Google admitted on X that it has "seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions.”
“Gemma is available via an API and was also available via AI Studio, which is a developer tool (in fact, to use it you need to attest you’re a developer). We’ve now seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions. We never intended this…” (Google on X, November 1, 2025)
For Google, this is a reputational hit at a moment when AI firms are under intense scrutiny for accuracy, bias, and governance. It’s one thing to issue disclaimers that a model is “for developer use only.” It’s another when it ends up in the hands of someone expecting factual accuracy and then fails spectacularly.
If you build or deploy AI, a “not for factual Q&A” label may not be enough. Users will ask real-world factual questions, and when the model lies, the consequences go beyond embarrassment: eroded trust, legal exposure, and even political fallout.

Jay Bonggolto always has a nose for news. He has been writing about consumer tech and apps for as long as he can remember, and he has used a variety of Android phones since falling in love with Jelly Bean. Send him a direct message via X or LinkedIn.
