Gemini starts spouting nonsense on the web and Android as reports flood in

The Google Gemini app
(Image credit: Derrek Lee / Android Central)

What you need to know

  • Users have taken to Reddit to report and showcase Gemini's odd string of gibberish and nonsensical responses to queries.
  • Many of the examples highlight Gemini delivering massive walls of text with repeated words, random characters, and pieces of code.
  • The issues have appeared in both the web and Android versions of Gemini, but rephrasing or resubmitting your question seems to get it back on track.

Google's AI chatbot Gemini has been encountering issues as users begin reporting strange instances of gibberish.

A Reddit user posted a screenshot of a Gemini response full of jumbled code snippets and characters from other languages (via 9to5Google). In the screenshot, Gemini replied with a massive chunk of repeated words that did nothing to address the user's query.

Another report showed the AI responding to the question, "If 0 was a Roman numeral, how would it look?" with another huge wall of unhelpful words and symbols. One Redditor shared an example of Gemini going completely off base when asked about satellite TV channels: the AI delivered a wall of "Not Cauchy, Cauchy, Cauchy," a reference to the Cauchy sequence, a mathematical concept.

While not widespread, these problems with Gemini are more annoying than serious. The chatbot isn't entirely broken; a quick refresh or simply asking the question again seems to set it right. The publication notes that the gibberish responses have been reported on both the web and the Android app.


Google's generative AI has had a strange stretch lately; users may remember its AI Overviews delivering erroneous responses, including tips to glue cheese onto pizza and a multitude of other outlandish ideas. Google's Head of Search, Liz Reid, chimed in last week, saying the behavior was likely the result of the AI not knowing how to weed out "nonsensical" and "satirical" questions.

Such questions create an "information gap": because they aren't typical, there's little factual information to draw on, so Google's AI pulls much more from user-generated content, which, as we all know, can skew toward the humorous. Reid said Google has been working on limits for Search's AI to reduce the "inclusion of satire and humor content."

Nickolas Diaz
News Writer

Nickolas is always excited about tech and getting his hands on it. Writing for him can vary from delivering the latest tech story to scribbling in his journal. When Nickolas isn't hitting a story, he's often grinding away at a game or chilling with a book in his hand.

    Google is losing the AI race so hard right now.

    The useless AI summary is not only intrusive and can't be turned off in settings, it also gives false or irrelevant information.

    Gemini, presumably built on a chunk of the aforementioned code (or rather vice versa), is also dumb as a brick. I've seen some really dumb, outlandish, and outright wrong information before, but now gibberish?

    Google, your patient is brain dead. Time to pull the plug and give up like with your other products.